How Technology Is Crash Landing in Our Public Schools
Do we have an adequate system for sorting through the ten-thousand-plus different educational technology materials and programs available for integration into our public school systems? In fact, we do not have a system at all. We trust our overworked teachers and their inexpert administrators to find their way (on behalf of our students) through a dazzling forest of finely tuned product pitches.
Or we leave these crucial decisions in the hands of private sector consultants. So far, even a marriage of our most celebrated tablets (iPads) and our most dominant publisher of educational materials (Pearson) spiraled down into an expensive failure. If the US cannot succeed with a one-device-per-child model pinned on our flagship hardware and the largest content behemoth in the backyard of our own tech industry—what’s happening out of the spotlight? If you want to get a keen sense of what’s happening inside the public schools of the United States, you could find no one better to speak with than an experienced school evaluator. The rest of us have to rely on test data or what we hear from students; but professional school evaluators spend their years immersed in classrooms, in conversation with administrators and in assessment of learning materials and methods.
While (for better or for worse) normal public schools in the US are generally not obligated to undergo external evaluations, public charter schools are an exception. Publicly funded charter schools need to satisfy external evaluators every five years in order to have their charters renewed. Adam Aberman has been conducting such evaluations for 12 years, in which time he’s evaluated more than 100 schools and more than 1,000 classroom sessions across 6 states. His evaluations have been divided between primary schools and middle and high schools and approximately one fifth of these evaluations have been of blended learning institutions or blended classrooms (in this context “blended” means “blended with a large dose of technology”). Aberman is also a proponent of educational technologies and a thought leader in the space, having founded and directed icouldbe.org (an online mentoring website) and The Learning Collective, a consulting company that specializes in online and digital learning (and a company at which I am also a Principal). What follows are the highlights of a full interview covering Aberman’s insights into the strengths and weaknesses of blended learning as it currently appears in US public charter schools and the particular challenge of using technology to confer critical thinking skills.
Poor Preparation
Aberman notes that a common trend for blended learning schools involves committing to digital learning tools without sufficient research or consideration. On the one hand, schools frequently underestimate the number of IT hours required to support their use of a given product and the number of reserve devices that may be needed; on the other hand, schools often discover at the end of their first year using a new digital tool that it doesn’t suit their needs. They are then back at the beginning of the cycle: selecting digital learning tools in a rush for the next academic year, and doing so without comprehensive or objective research to consult. As long as school administrators make big decisions based on the inexpert opinions of one or two colleagues, and as long as they decline to undertake serious needs assessments, publicly funded classrooms are likely to continue being mismatched with tech solutions. This is a policy-level failure to realize that all blended classrooms are effectively piloting new approaches to learning; they are learning laboratories that warrant closer observation and better support—especially in terms of evaluating and assessing the merits of their options.
It makes sense that this step is frequently neglected: it takes time and costs money. Indeed, the costs of undertaking a needs assessment and properly vetting a wide variety of edtech tools could completely eliminate any financial savings that a school hopes to gain through deploying technology. Kentaro Toyama has carefully researched the deployment of educational technologies in a variety of contexts; one of his most relevant findings is that broken educational systems cannot be fixed by technology alone. In fact, just as technology can amplify the strengths of a healthy system, it can amplify system dysfunction. Toyama and others draw our attention to the ongoing failure of US public schools.
Low-Hanging Fruit
Where Aberman sees classroom technology at its most successful is in the realm of literacy and numeracy games and applications designed for younger students. Programs of this variety—sharpened over several decades—are often well received, and this may be an indication of good things to come. Unfortunately, efforts to use technology for more complex learning goals, according to Aberman, are usually less successful.
While young children often report that literacy and numeracy programs are “fun,” older students using programs in other subjects at a middle or high school level are more likely to report that the tools are “too easy” or “boring”—not surprising, given that many popular programs depend on repetitive question types, varying only their content. There is a worrying possibility that our early experiments with blended and digital learning are actually under-serving students. A recent study showed that in Ohio 75% of brick-and-mortar schools perform well, but only 13% of virtual schools are meeting standards (Tucker et al., 2011; OH Department of Education, 2011)—a quantitative finding to complement Aberman’s qualitative perspective.
Critical Thinking Skills
Three of Aberman’s observations on critical thinking skills merit direct quotation: “Elementary and middle schools I’ve been in that leverage a lot of technology tend to do an even worse job at promoting students’ higher-order thinking abilities. At 100% of the approximately twenty blended schools I have evaluated, there is an acknowledged lack of students’ higher-order and critical thinking skills.” “Extended academic discourse and debate in classrooms are also hard to find in many blended schools (they’re hard to find in most schools, but especially in blended schools). Students may make short comments in online forums, but in-person discussions are often stunted.” “Most blended classrooms I walk into, I’m struck by the silence in the room.
Oftentimes, kids are on headphones and, though in the lower grades students are sometimes listening to the correct reading or enunciation of sounds and words, there is little to no discussion between students. In some schools, students are on headphones for the majority of the day.
This means that students, rather than comparing and exploring thoughts in real time (and fully forming and defending ideas with the depth that at times can only be achieved in person), are often relegated to learning and ideas limited by the digital boundaries created by a software developer.” Sure, teaching critical thinking is challenging in the best of circumstances, but considering that critical thinking is among the most foundational skills we can teach, we must optimize for it at every possible stage of learning. Proponents of technology are usually fans of problem-solving, believers in engineering and champions of learning.
But are they being overconfident about their products? • Is there an unscientific bravado behind the assumption that uni-variable learning tools will work well across many classrooms without creating atrophy in other skill areas? • Are our educational technologists overlooking social innovations and perhaps weakening our culture of learning? • If incorporating tech into our charter schools is further depressing our learning outcomes for older students, how can we change course? • By designing tech for core standards that overemphasize narrow learning goals, are we missing an opportunity to design more transformative technologies? Obviously, we cannot judge the future of technology on the performance of what we have currently deployed.
We know that technology has the potential to carry ever more weight within a school environment. But the way we are approaching the integration of technology into our school systems is raising red flags. If we don’t figure out exactly what these early warning signals mean and incorporate their lessons into our design and our educational philosophy, we risk generating backlash and squandering valuable momentum. We also risk producing a generation of graduates who are unprepared for the future ahead of us. Nathaniel Calhoun focuses on the intersection of last-mile development work challenges, mobile education for poverty alleviation and ecological design. Follow Nathaniel on Twitter @codeinnovation.
You Have a Notification: Welcome to the 24/7 Work Culture
Netflix recently announced an unlimited paid-leave policy that allows employees to take off as much time as they want during the first year after a child’s birth or adoption. It is trying to one-up tech companies that offer unlimited vacation as a benefit. These are all public-relations ploys and recruiting gimmicks. No employee will spend a year as a full-time parent; hardly any will go on month-long treks to the Himalayas.
Employees will surely take a couple of weeks off, but they will still be working—wherever they are. That is the new nature of work. In the technology industry, it is standard practice for employers to provide cell phones to their employees and to pay for data plans. This is because employees are expected to always be on call and to receive SMS text and emails.
Urgent or not, the emails continue for 24 hours a day—even on weekends. Companies don’t mandate that employees check them, but few dare not to check emails when they are commuting, at home, or on vacation—to make sure that they haven’t messed up. The reality is that there is no 9 to 5 any more. We are always connected, always on, always working—no matter where we are or what industry we are in. Everything is now urgent and problems that previously could have waited until the next day now need to be addressed immediately.
We can debate whether it is good or bad, but these are the new rules of work. Everything changed over the last decade as we became chained to the Internet. These changes are happening globally. Information of all kinds is being digitized: project plans, land records, customer complaints, legal contracts, building designs, and photographs; everything there is. Most of these data are being stored on line—so we can access them wherever we are.
There is no longer an excuse for not working. With digitization, work is also becoming micro-work. A big project becomes a series of small projects that can be done by people in different locations. Accounting firms routinely outsource tax preparation and data analysis; lawyers farm out discovery and contract creation; doctors rely on skilled technicians in other countries to do their radiological analysis. Data handling, website development, design, and transcription are commonly outsourced on sites such as Upwork, Freelancer, and 99Designs. Smaller, micro-tasks are farmed out via sites such as Amazon Mechanical Turk, Samasource, and CrowdFlower. With humanity becoming connected, many good things are becoming possible.
Crowdsourcing is making it possible for people to come together as never before to solve social problems. I saw the possibilities at first hand when using the power of the collective to create a book on how we can get more women to participate in the innovation economy. I was able to tap into the knowledge of more than 500 women all over the world. Within six weeks we came to a consensus on the key issues and solutions and gathered enough information to publish not just one but several books. The participants learned from each other, and the quality of the discussion kept increasing. Businesses are beginning to use the power of the collective as well. Rather than locking workers into departmental silos, companies on the cutting edge are using internal social-media sites to have employees communicate with and help each other.
What used to be the quarterly memo from the CEO has become a torrent of sharing, with information being exchanged at all levels of the corporation. Employees can gain access to people they would never have had contact with, including the CEO, and crowdsource solutions to their department’s problems. You also don’t need to be physically present any more to be at work. Telepresence robots are taking video conferencing to a new level.
There are several products on the market, such as those made by Suitable Technologies and iRobot, that allow a screen mounted on a mobile platform to move around the office and experience what is happening in a more human way. Imagine walking into your boss’s office while you are on vacation in Disneyland, stepping into a conference room to join a meeting, and then chit-chatting with your peers around the water fountain. Not only has the nature of work changed; so have the rules for getting ahead. Success no longer comes from hoarding knowledge, which was the key to job security in the past; it comes from sharing knowledge and helping the company solve its problems. The hard part is that employees have to take the onus to keep their skills current; they must keep reinventing themselves. They have to keep adapting to the changes that technology is bringing, because the ability to use technology is now a fundamental skill—like reading and writing.
It also no longer matters what degree you have or from what school you graduated; what matters now is how effective you are at getting the job done. And that means staying connected to work constantly. So employees can take as much leave as they want, but the employer’s expectations remain the same: that the job will be done.
Think Your Conscious Brain Directs Your Actions? Think Again
Think your deliberate, guiding, conscious thoughts are in charge of your actions? In a provocative new paper in Behavioral and Brain Sciences, a team led by Dr. Ezequiel Morsella at San Francisco State University came to a startling conclusion: consciousness is no more than a passive machine running one simple algorithm — to serve up what’s already been decided, and take credit for the decision. Rather than a sage conductor, it’s just a tiny part of what happens in the brain that makes us “aware.” All the real work goes on under the hood — in our unconscious minds. The Passive Frame Theory, as Morsella calls it, is based on decades of experimental data observing how people perceive and generate motor responses to odors. It’s not about perception (“I smell a skunk”), but about response (running from a skunk). The key to cracking what consciousness does in the brain is to work backwards from an observable physical action, explains Morsella in his paper.
If this isn’t your idea of “consciousness,” you’re not alone. Traditionally, theorists tried to tackle the enigmatic beast by looking at higher levels of human consciousness, for example, self-consciousness — the knowledge that you exist — or theory of mind — the knowledge that you and others have differing beliefs, intents, desires and perspectives. While fascinating on a philosophical level, this approach is far too complex to explain on a fundamental level what consciousness is for. Instead, Morsella believes that studying basic consciousness — the awareness of a color, an urge, a sharp pain — is what will lead to a breakthrough. “If a creature has an experience of any kind — something it is like to be that creature — then it has this form of consciousness,” Morsella said in an email to Singularity Hub. It doesn’t have to be high-level, and “it’s unlikely to be unique to humans.” The Passive Frame Theory goes like this: nearly all the decisions and thoughts that need to be made throughout the day are performed by many parts of the unconscious brain, well below our level of awareness. When the time comes to physically act on a decision, various unconscious processes deliver their opinions to a central “hub,” like voters congregating at town hall.
The hub listens in on the conversation, but doesn’t participate; all it does is provide a venue for differing opinions to integrate and decide on a final outcome. Once the unconscious makes a final decision on how to physically act (or react), the hub — consciousness — executes that work and then congratulates itself for figuring out a tough problem.
In a way, the unconscious mind is like a group of talented ghostwriters working on a movie script for a celebrated screenwriter. If all goes smoothly, they bypass the screenwriter and deliver the final product straight to the next level.
If, on the other hand, conflict arises — say the ghostwriters differ in their ideas on how the story should unfold — their argument may reach the ears of that famous screenwriter, who becomes aware of the problem, but nevertheless sits and waits for the writers to figure it all out. Once that happens, the screenwriter hands off the script, and gets all the credit. Similar to the screenwriter, consciousness doesn’t debate or solve conflict in our heads; consciousness needs to be “on” in order to relay the final outcome — so it is essential — but it doesn’t participate in the nitty-gritty of decision-making. Why did consciousness emerge in this way? Morsella thinks the answer is evolution. Like all animals, humans try to conserve mental energy and automate our biological processes. Most of the time we run on instincts, reflexes and minute-to-minute immediate thoughts.
Take breathing as an example — it’s completely automated, to the point that consciously trying to maintain a steady rhythm is surprisingly hard. In this case, conscious thought just bogs the process down. Unlike most animals, however, humans gradually evolved into complex social beings capable of cultivating our intelligence for language and other higher faculties. Faced with increasingly difficult decisions on how to act, we suddenly needed a middleman to slow our unconscious mind down. Say you find yourself underwater; your instinct is to breathe, but better judgment — delivered by an unconscious cry of alarm (“don’t breathe!”) — tells you that you would drown. Your unconscious mind orders your consciousness to activate the muscles that will allow you to hold your breath and keep you alive.
Consciousness triggers an adaptive motion. The power of our unconscious mind doesn’t stop at basic bodily functions. In the paper, Morsella cites language — a high-level, complex and perhaps distinctively human faculty — as another product of the unconscious mind.
When you speak, you’re only consciously aware of a few words at a time, and that is only so you can direct the muscles around your mouth and tongue to form those words. What you’re saying is prescribed under the hood; your conscious mind is simply following a script. Morsella acknowledges that his theory is unconventional and difficult to accept. “The number one reason it’s taken so long to reach this conclusion is because people confuse what consciousness is for with what they think they use it for,” Morsella said in a statement accompanying his paper. But none of this theory takes away our treasured qualities as sentient human beings — our imagination, our language, our sense of self and others — it just points to the unconscious mind as the main player on our brainy fields.
In fact, Morsella hopes his theory could lead to new ideas about intrusive thoughts or obsessions that often occur in mental disorders. “The passivity of consciousness explains why we are aware of urges and thoughts that are maladaptive,” Morsella said to Singularity Hub, because it doesn’t know that it shouldn’t be thinking about these thoughts.
“The system is less all-knowing and purposeful than we thought.”
George Westerman: Communicating IT’s Impact as a Business Enabler
by MIT’s Center for Digital Business
Your book, The Real Business of IT, is about IT, but there's nothing about technology in the book. It's all about communication – what's behind that?
So there's a little bit of technology in there, but you're right: it's less about the technology and more about the communication.
And this was kind of a surprise when we came into the research. I spent a lot of time with non-IT executives through this course that I have, and also through interviews that we did in research.
What we found in our early studies is that the number one driver of their perception of value wasn't from the technology; it was from whether they had effective oversight of IT. Whether they had a transparent view of what's happening in IT, whether they knew what IT was providing, what role they needed to play, and what benefits they were getting. This, over and over again in our studies, showed up as important, and that's when we realized there's something really essential in learning how to communicate the right way.
Genomic Elements Reveal Human Diversity
By Anna Azvolinsky
Duplication of copy number variants may be the source of greatest diversity among people, researchers find.
Image: world map with geographic coordinates of populations sampled in the study. Credit: Sudmant et al.
Genetic differences among ethnically diverse individuals are largely due to structural elements called copy number variants (CNVs), according to a study published today (August 6) in Science. Compared with other genomic features, such as single nucleotide variants (SNVs), CNVs have not previously been studied in as much detail because they are more difficult to sequence. Geneticist Evan Eichler at the University of Washington in Seattle and an international team of colleagues studied the genomes of 236 people from 125 distinct human populations around the world—analyzing both SNVs and CNVs.
“The take-home message is that we continue to find a lot more genetic variation between humans than we appreciated previously,” Eichler told The Scientist. “This is a really exciting study of CNVs in worldwide human populations and has a much finer resolution than what had been done before,” said Lohmueller, who studies human genetic variation at the University of California, Los Angeles, and was not involved in the work.
Classified as deletions or duplications, CNVs are genomic loci that can vary greatly in copy number, and they are often located in regions of highly repetitive content, making them more difficult to sequence than SNVs. Thus far, the vast majority of human genome analyses—including those from the Human Genome Project and the 1,000 Genomes Project—have focused on SNVs and CNV deletions; these studies largely overlooked CNV duplications because of technology limitations.
In the present study, the median size of CNVs identified was 7,396 base pairs. “Here, [the authors] put in an extra effort, sequencing each genome much more deeply—about 10 times more than what was done in the 1,000 Genomes Project,” said a geneticist at the Stanford School of Medicine, who was also not involved in the study. “That is a massive achievement in genomic data generation.” From their dataset, the researchers were able to reconstruct the organization of an ancestral human genome—around 200,000 years old—and compare it to the chimpanzee and orangutan reference sequences.
This comparative analysis revealed at least 40 million base pairs of additional DNA in the ancestral human genome reconstruction that are not found in the current human reference genome. A portion of this sequence was retained in the genomes of several modern African people, suggesting that the additional sequence was lost as humans migrated out of Africa. Eichler and his colleagues also compared the modern human genomes to genomes of three ancient human lineages as well as two extinct lineages. The researchers found that CNVs were a source of seven times greater diversity compared to SNVs. “While there are fewer CNV events, the number of base pairs that are different between two individuals are largely dictated by CNVs, especially within the duplicated regions,” Eichler explained. This difference between CNVs and SNVs is likely to grow much larger as new sequencing platforms are used to understand human genetic variation, he added. Specifically, CNV duplications were the source of the greatest diversity; these features were four times more likely to affect genes than CNV deletions across all populations, suggesting that selection of the duplications and deletions differed throughout evolution.
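The arithmetic behind Eichler's point, that rarer CNV events can dominate the total number of differing base pairs, can be sketched with a toy calculation. The counts below are hypothetical and chosen only to echo the article (an SNV touches one base pair; the study's median CNV spanned 7,396 base pairs); they are not data from the paper.

```python
# Toy illustration: fewer CNV events can still account for far more
# differing base pairs than a much larger number of SNVs.
# Each variant is (kind, length_in_bp); SNVs always affect 1 bp.

def diverged_bp(variants):
    """Sum the base pairs affected, split by variant class."""
    totals = {"SNV": 0, "CNV": 0}
    for kind, length in variants:
        totals[kind] += length
    return totals

# Hypothetical tallies: 30,000 SNVs vs. only 30 CNVs of median size 7,396 bp.
variants = [("SNV", 1)] * 30000 + [("CNV", 7396)] * 30
totals = diverged_bp(variants)
print(totals)  # CNV base pairs (221,880) exceed SNV base pairs (30,000)
```

Even with a thousand times fewer events, the CNV column wins on base pairs, which is the sense in which CNVs "dictate" the divergence between two genomes.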
The researchers also found that CNVs had the greatest effect on genomic diversity among non-African human genomes. The implication, said Eichler, is that during “the last 80,000 years, the genomes of our ancestors that left Africa have gone through much more remodeling by CNVs compared to SNVs.” Next, Eichler’s team would like to compare specific CNV loci among different modern populations to ascertain positive versus negative selection as well as correlations with disease risk.
The study provides clues on how evolution may have acted on different genomic elements, but there’s a lot more to learn, said Lohmueller. “This is a great step in that direction but it’s not the last part of the story on understanding which CNVs in our genomes are neutral or deleterious.” P.H. Sudmant et al., “Global diversity, population stratification, and selection of human copy number variation,” Science, doi:10.1126/science.aab3761, 2015.
By JONAH BROMWICH
Who is making the most noise about smartphone overuse? It seems that it might be smartphone companies themselves. A study released this week by Motorola tried to illustrate that people around the world are obsessed with smartphones.
A recent report also emphasized the amount of time certain users are spending on their phones. Even the company that popularized the smartphone is expressing concern for its customers. In an interview earlier this year, Kevin Lynch, the man responsible for developing the software for the Watch, seemed concerned about our inability to detach: “We’re so connected, kind of ever-presently, with technology now,” Mr. Lynch said. “People are carrying their phones with them and looking at the screen so much.”
We asked The Times’s consumer technology expert and the author of “Always On,” a book about constant connectivity, to explain why smartphone companies are so interested in what their products are doing to us. He said that because consumer anxiety about overusing smartphones has lessened, companies feel more comfortable using that line of thinking as a way to market new products.
Why are all these studies coming out about smartphone addiction? Do you think it’s something that people are worried about?
I think Microsoft tried to do this in 2010. They had all these ads come out that said the Windows phone is better because it helps you get away from your phone more. We made this more streamlined, more efficient and easier to use, so therefore, you’re going to spend less time on your smartphone.
Which, you know, just didn’t turn out to be true. It was just a marketing twist, and we’re seeing more of that now. But there are definitely people who are anxious about using their smartphones too much.
Do you think those people are addicted? Are there legitimate worries about using our phones too much? It’s a case-by-case basis, but there are legitimate concerns for people and their everyday interactions. It’s generally rude to keep looking at your smartphone when you’re at dinner with your friends, for example. When you’re in a relationship, and you’re spending more time with your smartphone than you are with your partner, then you probably have a problem. So I think those are some of the concerns when it comes to our social relationships. Have you ever had anyone complain to you that you’re on your phone too much?
I get more of the criticism for video games. But yes, there are technology issues that come up in my life. I’m addicted to email probably, working at The New York Times where our bosses are just emailing us all day. I kind of have to tell myself to get away from that on the weekend and set my own limitations and restrictions.
How would you suggest people stay away from their phones? The way that I’ve set it up is that I only get a sound for emails from my bosses, using the V.I.P. alerts on the iPhone. I pretty much ignore everything else, unless it’s friends. I turn on “Do Not Disturb” every day from midnight to 6 a.m., just so I’m not getting crazy email noises while we’re sleeping.
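The setup described above is essentially a two-rule filter: alert only for a VIP sender, and never during a quiet-hours window. A minimal sketch, with the sender addresses and the midnight-to-6 a.m. window as stand-ins for whatever a real notification system would use:

```python
from datetime import time

# Hypothetical VIP list; in the interview this is "emails from my bosses."
VIP_SENDERS = {"boss@example.com"}

def should_alert(sender, now):
    """Return True if an incoming email should make a sound:
    the sender must be a VIP, and we must be outside quiet hours."""
    quiet_hours = time(0, 0) <= now < time(6, 0)  # midnight to 6 a.m.
    return sender in VIP_SENDERS and not quiet_hours

print(should_alert("boss@example.com", time(9, 30)))    # True
print(should_alert("friend@example.com", time(9, 30)))  # False
print(should_alert("boss@example.com", time(3, 0)))     # False (quiet hours)
```

The point of the sketch is that "restrictions around yourself" reduce to a couple of explicit predicates; everything not matched by a rule is silently ignored.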
An argument I make in the book is, simply, if it’s really becoming a problem, you can start setting restrictions around yourself. For example, at dinner turn on airplane mode on the iPhone.
And stop looking at your phone; pay attention to your partner and your friends and so on. You can do these things! A report a couple of years ago claimed that smartphones had spread with “unprecedented speed.” Do you think the speed of adoption has contributed to the anxiety? I kind of feel like the opposite: The fact that smartphone adoption is spreading so quickly is kind of proof that people are getting past that anxiety.
Back when the smartphone was still becoming a mainstream thing, that’s when anxiety was at its peak. But there have been very few studies suggesting anything severely negative about using smartphones. There has always been fear of new technology.
The classic example is Socrates warning that relying on the written word would make us stop memorizing things, and that certainly turned out not to be true. And then there’s the television. The television is funny in that we’ve all moved on, but that’s one piece of technology that we should actually be more concerned about because it involves a lot of sitting around.
And sitting is really, really bad for you.
Scientists show a link between intestinal bacteria and depression
by NeuroScientistNews
Image: immunostaining of mouse ileum. Credit: Vasanta Subramanian / Wellcome Images
Exploring the role of intestinal microbiota in the altered behavior that is a consequence of early life stress
Scientists from the Farncombe Family Digestive Health Research Institute at McMaster University have discovered that intestinal bacteria play an important role in inducing anxiety and depression. The new study, published in Nature Communications, is the first to explore the role of intestinal microbiota in the altered behavior that is a consequence of early life stress. “We have shown for the first time in an established mouse model of anxiety and depression that bacteria play a crucial role in inducing this abnormal behavior,” said Premysl Bercik, senior author of the paper and an associate professor of medicine with McMaster's Michael G. DeGroote School of Medicine. “But it's not only bacteria, it's the altered bi-directional communication between the stressed host -- mice subjected to early life stress -- and its microbiota, that leads to anxiety and depression.”
It has been known for some time that intestinal bacteria can affect behavior, but much of the previous research has used healthy, normal mice, said Bercik. In this study, researchers subjected mice to early life stress with a procedure of maternal separation: from day three to day 21, newborn mice were separated from their mothers for three hours each day and then returned to them. First, Bercik and his team confirmed that conventional mice with complex microbiota, which had been maternally separated, displayed anxiety- and depression-like behavior, with abnormal levels of the stress hormone corticosterone. These mice also showed gut dysfunction based on the release of a major neurotransmitter, acetylcholine. Then, they repeated the same experiment in germ-free conditions and found that, in the absence of bacteria, mice which were maternally separated still had altered stress hormone levels and gut dysfunction, but they behaved similarly to the control mice, not showing any signs of anxiety or depression. Next, they found that when the maternally separated germ-free mice were colonized with bacteria from control mice, the bacterial composition and metabolic activity changed within several weeks, and the mice started exhibiting anxiety and depression.
'However, if we transfer the bacteria from stressed mice into non-stressed germ-free mice, no abnormalities are observed. This suggests that in this model, both host and microbial factors are required for the development of anxiety and depression-like behavior. Neonatal stress leads to increased stress reactivity and gut dysfunction that changes the gut microbiota which, in turn, alters brain function,' said Bercik. He said that with this new research, 'We are starting to explain the complex mechanisms of interaction and dynamics between the gut microbiota and its host. Our data show that relatively minor changes in microbiota profiles or its metabolic activity induced by neonatal stress can have profound effects on host behavior in adulthood.'
Bercik said this is another step in understanding how microbiota can shape host behaviour, and that it may extend the original observations into the field of psychiatric disorders. 'It would be important to determine whether this also applies to humans.
For instance, whether we can detect abnormal microbiota profiles or different microbial metabolic activity in patients with primary psychiatric disorders, like anxiety and depression,' said Bercik. Note: Material may have been edited for length and content. For further information, please contact the cited source.
Memory – illustration. Credit: Neil Webb / Wellcome Images

Sleep makes our memories more accessible, study shows by NeuroScientistNews

Sleeping not only protects memories from being forgotten, it also makes them easier to access, according to new research from the University of Exeter and the Basque Centre for Cognition, Brain and Language. The findings suggest that after sleep we are more likely to recall facts which we could not remember while still awake. In two situations where subjects forgot information over the course of 12 hours of wakefulness, a night's sleep was shown to promote access to memory traces that had initially been too weak to be retrieved. The research, published in the journal Cortex, tracked memories for novel, made-up words learnt either prior to a night's sleep, or an equivalent period of wakefulness. Subjects were asked to recall words immediately after exposure, and then again after the period of sleep or wakefulness.
The key distinction was between those word memories which participants could remember at both the immediate test and the 12-hour retest, and those not remembered at test but eventually remembered at retest. The researchers found that, compared to daytime wakefulness, sleep helped rescue unrecalled memories more than it prevented memory loss. Nicolas Dumay of the University of Exeter explains: 'Sleep almost doubles our chances of remembering previously unrecalled material.
The post-sleep boost in memory accessibility may indicate that some memories are sharpened overnight. This supports the notion that, while asleep, we actively rehearse information flagged as important. More research is needed into the functional significance of this rehearsal and whether, for instance, it allows memories to be accessible in a wider range of contexts, hence making them more useful.' The beneficial impact of sleep on memory is well established, and the act of sleeping is known to help us remember the things that we did, or heard, the previous day.
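The test/retest categories described above can be made concrete with a toy tally. The recall data below are made up purely to illustrate the distinction; they are not the study's numbers.

```python
# Toy tally of the study's two memory categories (made-up recall data for
# four invented words; purely to illustrate the test/retest distinction).
immediate = {"flember": True, "gorple": False, "trivit": True, "parnad": False}
retest    = {"flember": True, "gorple": True,  "trivit": False, "parnad": False}

retained = sorted(w for w in immediate if immediate[w] and retest[w])
rescued  = sorted(w for w in immediate if not immediate[w] and retest[w])
lost     = sorted(w for w in immediate if immediate[w] and not retest[w])

# "Rescued" items (forgotten at test, recalled at retest) are where the
# study found sleep's largest benefit.
print(retained, rescued, lost)
```

Comparing the size of the "rescued" set after sleep against the same set after wakefulness is, in essence, the study's measure.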
The idea that memories could also be sharpened and made more vivid and accessible overnight, however, is yet to be fully explored. Dr Dumay believes the memory boost comes from the hippocampus, an inner structure of the temporal lobe, unzipping recently encoded episodes and replaying them to regions of the brain originally involved in their capture - this would lead the subject to effectively re-experience the major events of the day. Nicolas Dumay is an experimental psychologist at the University of Exeter and an honorary Staff Scientist at the Basque Centre for Cognition, Brain and Language (BCBL), in Spain. 'Sleep not just protects memories against forgetting, it also makes them more accessible' is published in the journal Cortex. Note: Material may have been edited for length and content. For further information, please contact the cited source.
A crucial element for the survival of animals and humans is learning how to acquire rewarding stimuli—food, sex, and social rewards. While learning is a powerful skill, nothing in the world remains the same for long, and learning must be adaptive in order to allow an animal to flexibly survive a changing environment.
Dopamine has long been known for its critical role in cue-reward associations, and new data provide a much richer and more complex picture of how dopaminergic neurons function. Making decisions about impending actions requires an understanding of the expected value of outcomes, the relative costs incurred between choices, and the probabilities of achieving the possible outcomes.
The neural activity of midbrain dopamine neurons is thought to play a crucial role by informing the decision process with the values of known expected outcomes and, after decisions, by informing the animal whether the outcome was better, worse, or as expected. Midbrain dopamine neurons are, therefore, critical for invigorating behaviors aimed at acquiring large expected rewards and for adjusting behaviors when the outcomes of decisions are revealed.

The Theory

In the late 1990s, scientists were looking for teaching signals in the brain—mechanisms by which learning may take place. One such teaching signal, proposed in the 1970s by Rescorla and Wagner 1, was based on the idea that a fully expected outcome isn't novel (for example, you walk into a dark room, flick the light switch and a light comes on). For learning to take place, something unexpected must occur. Imagine you have never seen a light switch, and you walk into a dark room, yet you want to turn the lights on. After some fumbling about, you accidentally flick the right switch and—surprise!—the lights turn on.
This is a positive outcome, and also unexpected. According to Rescorla and Wagner, the difference between the expected outcome (no change, the light stays off) and the actual outcome (big change, the lights turn on) should induce a teaching signal. Further, because the difference between expected and actual outcome was in the desired direction, the outcome was appetitive, and this signal is known as a 'positive prediction error.' Over time, as you learn which switch turns the lights on, this signal should decrease, since you have learned the outcome of flicking that light switch. However, one day, you walk into this dark room, flick the switch and—surprise!—no lights come on.
This outcome is surprising, and negative. It is theorized to induce a 'negative prediction error', and if the switch continues to leave the lights off over time, you will learn to avoid it when trying to turn them on. These are the basics of prediction error signals: to generate a prediction error, the outcome of an action must be unexpected, and that outcome can be either unexpectedly positive or unexpectedly negative in value.

The Data

In the late 1990s, an important discovery was made. Dopamine neurons in the midbrain of monkeys were observed to increase firing to unexpectedly positive outcomes (i.e., the unexpected delivery of a reward).
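Rescorla and Wagner's teaching signal can be sketched numerically. Below is a minimal, hand-rolled simulation of the light-switch story; the learning rate and outcome values are arbitrary illustrative assumptions, not parameters from any of the studies discussed here.

```python
# Minimal Rescorla–Wagner sketch (illustrative assumptions throughout).
def rw_update(V, outcome, alpha=0.3):
    """Return the updated value estimate and the prediction error."""
    delta = outcome - V          # prediction error: actual minus expected
    return V + alpha * delta, delta

V = 0.0                          # you know nothing about the switch yet
errors = []
for trial in range(20):          # the lights come on every time you flick it
    V, delta = rw_update(V, 1.0)
    errors.append(delta)

# Early trials: large positive prediction error; by trial 20 it is near zero.
print(round(errors[0], 2), round(errors[-1], 2))

# One day the switch fails: a large negative prediction error.
V, delta = rw_update(V, 0.0)
print(round(delta, 2))
```

The decaying positive error and the abrupt negative error on omission mirror the dopamine firing patterns described next.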
This increase was transient: as more trials occurred and the reward became predicted, the firing of dopamine neurons decreased. Then, once this outcome had become expected, researchers unexpectedly withheld the reward, and dopamine neurons paused firing 2-5. This finding caused a flurry of research and was replicated across species, including humans 6, rats 7 and mice 8. Further, the increases in activity of dopamine neurons to the outcomes shifted, such that a cue which predicted some outcome itself induced firing in dopamine neurons. Suddenly, scientists had identified a plausible neural mechanism of adaptive learning.
More recently, researchers used optogenetics to verify that the correlations between neural activity and learning that had been observed were causal. Steinberg et al (2013) used a blocking paradigm to directly test the role of prediction error signals from dopamine neurons on learning. A blocking paradigm is a scenario where an animal fails to learn a new piece of information, because this new learning is 'blocked' by an old association. For example, after an animal learns that a tone predicts an outcome (water), experimenters pair the tone with another cue (light).
On test day, just the 'blocked' cue (the light) is presented to the animal, to see if the animal has learned new information. However, because the outcome is exactly the same for the paired light and tone as it was for the tone alone, there should be no difference in the outcome, and no prediction error generated. By optogenetically activating dopamine neurons during the tone and light pairing, Steinberg et al were able to artificially induce a positive prediction error and create new learning, providing the first causal evidence supporting this theory 9. These data solidified the role of midbrain dopamine neurons in signaling positive prediction error, and showed how these prediction error signals can drive behavior.

The Unanswered Questions

At the same time as the role of midbrain dopamine neurons in signaling prediction errors was being confirmed, other experimental data were emerging which suggested the real picture was a little less neat. In some recording experiments, some neurons increased firing for both unexpected negative and unexpected positive outcomes, contrary to the results described above. These neurons are thought to signal salience: an unexpected event, regardless of valence, is potentially highly important, and behavior will need to be invigorated either to repeat the actions needed to replicate an unexpected positive outcome (in the event of something appetitive) or to escape or avoid it in the future (in the case of something aversive).
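The blocking paradigm described above can be sketched with the same Rescorla–Wagner update applied to a compound cue, under the simplifying assumption that both cues share a single prediction error. This is a toy model, not the Steinberg et al. experimental protocol.

```python
# Toy blocking simulation using the Rescorla–Wagner update (illustrative
# sketch under the assumption that compound cues share one prediction error).
alpha, reward = 0.3, 1.0
V = {"tone": 0.0, "light": 0.0}

# Phase 1: the tone alone predicts water until it is well learned.
for _ in range(50):
    delta = reward - V["tone"]
    V["tone"] += alpha * delta

# Phase 2: tone + light together; the shared error drives both cues.
for _ in range(50):
    delta = reward - (V["tone"] + V["light"])
    V["tone"] += alpha * delta
    V["light"] += alpha * delta

# The tone already predicts the reward, so delta stays near zero and the
# light acquires almost no value: its learning is "blocked".
print(round(V["tone"], 3), round(V["light"], 3))
```

Artificially injecting a positive delta during phase 2, as the optogenetic stimulation effectively did, would let the light acquire value despite the blocking.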
These results were recently identified in behaving primates, working for either a fluid reward or working to avoid a puff of air to the eye 10. Additionally, the locations of dopamine neurons which signal value, salience, and prediction error are not exactly discrete, but instead have a relatively large area of overlap 11. Whether this overlap is a function of neurodevelopment or functional experience remains to be seen. So what role do midbrain dopamine neurons play in decision making?
It appears that some neurons are crucial for providing an alerting signal, identifying when something unexpected has happened, whether appetitive or aversive in nature. At the same time, other neurons simultaneously provide the relative valence of this unexpected outcome, signaling whether something good or bad occurred. These two neural populations must work in concert to provide a teaching signal, helping animals learn which cues are predictive of which outcomes, and guiding behavior towards the more appetitive options in future decisions. Together, the phasic activity of midbrain dopamine neurons serves to alert, teach, and inform, all fundamentally critical functions for an organism to live an adaptable life.
References
1. Rescorla RA & Wagner AR (1972) in Classical Conditioning II: Current Research and Theory, eds Black AH & Prokasy WF (Appleton-Century-Crofts), 64–99.
2. Schultz W (1998) Predictive reward signal of dopamine neurons. Journal of Neurophysiology 80:1–27.
3. Schultz W (1999) The reward signal of midbrain dopamine neurons. News in Physiological Sciences 14(6):249–255.
4. Schultz W, Dayan P, Montague PR (1997) A neural substrate of prediction and reward. Science 275(5306):1593–1599.
5. Schultz W & Dickinson A (2000) Annual Review of Neuroscience 23:473–500. doi:10.1146/annurev.neuro.23.1.473
6. D'Ardenne K et al. (2008) Science 319(5867):1264–1267. doi:10.1126/science.1150605
7. Roesch MR, Calu DJ, Schoenbaum G (2007) Nature Neuroscience 10:1615–1624. doi:10.1038/nn2013
8. Cohen JY et al. (2012) Nature 482:85–88. doi:10.1038/nature10754
9. Steinberg EE et al. (2013) Nature Neuroscience. doi:10.1038/nn.3413
10. Matsumoto M & Hikosaka O (2009) Nature 459:837–841. doi:10.1038/nature08028
11. Bromberg-Martin ES, Matsumoto M, Hikosaka O (2010) Neuron 68:815–834. doi:10.1016/j.neuron.2010.11.022

It’s actually hard to know what to believe about millennials, the Americans born after 1980 who make up the largest generation in history. Every week there’s a —but the most conflicting reports have to do with where they live. Over the last few years, across the country—around the world, too—people of all ages, including millennials, have been moving into cities at an astonishing rate.
Now more than half of the world’s population is urban. So here’s the: Are today’s 20- and 30-somethings really going to live more urban lifestyles than Gen Xers or Boomers? Or are they going to eventually vacate cities for the ‘burbs, just like every generation before them?
Then came some interesting data, pegged to the release of 2014 Census information this spring: Millennials have indeed started —but not necessarily in favor of a quiet rural or suburban life. Instead, we’re seeing a brand new trend: Thanks to the generation’s size and influence, millennials are moving to new places made just for them, by them—revitalizing smaller cities or opting for hybridized urban-burb enclaves where quality of life is the driving force. The Rent Is Too Damn High As the first wave of millennials started to take jobs, it seemed that this generation was dedicated to living and working downtown. According to different studies, millennials were prioritizing dense neighborhoods, helping to fuel the biking boom, even—whoa!—. Companies luring millennials promoted their downtown locations with transit proximity, bike-commuting amenities, and other perks which aligned with these lifestyles. But it turns out that many millennials weren’t ever —they were just putting off the move to the suburbs for a few more years.
A report in May discovered that by investing in car companies and single-family home builders. “Especially in the older millennials, we’re seeing a move towards more traditional patterns, just on a delayed time frame,” said Sarah House, an economist at Wells Fargo. But it’s not only about following “traditional” patterns; it’s also about being priced out of an insanely expensive housing market.
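Being "priced out" is ultimately arithmetic: a mortgage payment compared against income. Here is a simplified sketch of that kind of affordability check; the rate, term, down payment, prices and 28-percent rule below are illustrative assumptions, not the figures any particular analysis used.

```python
# Simplified home-affordability check (all inputs are illustrative
# assumptions: rate, term, down payment, incomes, prices, 28% rule).
def monthly_payment(price, annual_rate=0.04, years=30, down=0.20):
    """Standard amortized mortgage payment on the financed portion."""
    principal = price * (1 - down)
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

def affordable(price, annual_income, cap=0.28):
    # rule of thumb: housing payment under ~28% of gross monthly income
    return monthly_payment(price) <= cap * annual_income / 12

print(affordable(250_000, 60_000))   # a smaller-city price point
print(affordable(800_000, 60_000))   # a big-city price point
```

On the same salary, the smaller-city price clears the threshold while the big-city price does not, which is the shape of the squeeze described here.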
Last month, Bloomberg crunched the most recent US Census data and came up with the, declaring 13 cities completely unaffordable for home ownership based on estimated earnings for the millennials living there.

Bloomberg.com chart using data from US Census, Zillow.com and Bankrate.com

What’s happening with millennials and cities is really just part of a larger issue around the rising cost of housing that’s pricing out plenty of people in already-expensive cities. Of course moving into a city is going to be extra-unattainable, financially, for someone just starting out in their career. Maybe millennials are actually way smarter than everyone else for not staying in places they can’t realistically afford. But it turns out that millennials are different in the sense that they’re coming up with innovative ideas for how to live well wherever they are—which sometimes means bringing the urban experience with them.

Urbanizing the Suburbs

According to a recent study by the Urban Land Institute, “,” a nationwide survey of millennials aged 19 to 36 showed that most of them were never living the glamorous downtown life that most stories like to describe.
Only 13 percent of millennials in the survey lived in or near downtowns—the rest lived in other city neighborhoods, or in the suburbs.

Where millennials said they lived, based on the ULI’s survey

The kinds of places that millennials want to live share a lot of the same characteristics with urban centers—they’re looking for amenities like walkability and public transit. But according to the study, it’s more about relationships and having the time to enjoy those relationships, which doesn’t necessarily mean working long hours to pay the rent in a big city. “Gen Yers want to live where it’s easy to have fun with friends and family, whether in the suburbs or closer in,” says M. Leanne Lachman, one author of the study. “This is a generation that places a high value on work-life balance and flexibility. They will switch housing and jobs as frequently as necessary to improve their quality of life.” Chicago has always been a haven for young professionals.
Its suburbs are also some of the most famous in the country, a sprawling bedroom community that ripples out from the urban core. But just like the way that Chicago’s inner city is focusing on making a lot of the changes that supposedly attract those young professionals—improving public spaces, redesigning pedestrian connections, and building more biking infrastructure—its suburbs are undergoing their own renaissance, too.

Opened this year: a dense, transit-accessible, mixed-use development in Glenview, Illinois, a suburb of Chicago

These places once known for their large yards and reliance on cars are changing. According to a, development is radically shifting in suburbs, mostly to accommodate millennials: For example, in Wilmette, a town that’s historically limited rentals, the village board preliminarily approved a plan in May to build a 75-unit luxury apartment building with ground-floor stores on Green Bay Road, across from the Metra station.
Units will average about $2,250 a month for 1,000 square feet. The village courted the project in concert with its master plan, which in 2014 raised downtown height limits from three to five stories. These kinds of compact, livable communities that crop up in less-dense areas go by all sorts of names, like New Urbanism and “walkable urban places,” but the one that’s stuck recently is “”—and the urban burbs are a new kind of hybridized place made just for millennials. Millennials might not be staying in the urban cores, but rather, they’re helping to remake the urban-like enclaves that allow easy access to the city when they want it. These places where millennials are choosing to live still have the qualities of downtowns—dense housing, transit connections, walkability, good food, great bars—without the high prices.

Revitalizing Smaller Cities

I wanted to know if any of this rang true to actual millennials. So I talked to participants of the Millennial Trains Project, a nonprofit which selects a group of millennial entrepreneurs to travel across the country on an Amtrak train, stopping in cities along the way to meet with civic-minded startups.
Each of the participants is required to crowdfund $5,000 to pay for their own participation in the trip, and many end up forming relationships which spin off into new urban-minded projects.

The project took 25 young innovators on an Amtrak journey across the country in the spring of 2015

There seemed to be a different trend for millennial living that was well represented by the group. Instead of settling in big, expensive urban centers like New York, San Francisco, and Los Angeles, many of these millennials were leaving to look for opportunities in smaller cities. Nicole Behnke is 24 and works at what she calls a “social architecture agency” in Milwaukee called NEWaukee, where she organizes events for young professionals. She has a car, although she doesn’t like to drive it.
She has noticed some of her friends moving out into suburbs, but that’s not what she personally wanted out of a living situation: She recently closed on a condo in downtown Milwaukee. For her, the choice to live in Milwaukee instead of, say, Chicago, or even its urban burbs, came down to more than just price (although that was a big factor). While visiting San Antonio during the Millennial Trains adventure—a place she had always perceived to be in the shadow of Austin—she saw the incredible opportunity offered by her move to Milwaukee. She realized that moving to a smaller city was allowing her to help shape a community.
NEWaukee hosts events like networking mixers, neighborhood tours and even a night market to gather millennials and rally civic pride

“There is an energy of millennials who are coming together and galvanizing, and they want to be the creators,” says Behnke. “Not to take away from what they are doing in places like Austin and Brooklyn, but do you want to participate in their culture, or do you want to be like San Antonio or Milwaukee and be the creator of the culture? A lot of us want to be the creators—we want to be the ones making the change.” After traveling to so many cities and seeing this generation in action, Behnke believes that millennials want to be invested in where they live in a way that their parents were not. Whether it’s driving changes in the suburbs or revitalizing a smaller city, 79 million Americans who want to make their communities even a little bit better is a very good thing for the country—and there might not be a better legacy for any generation in history. Illustration by Pete Ryan

First computers recognized our faces, now they know what we’re doing

We haven't designed fully sentient artificial intelligence just yet, but we're steadily teaching computers how to see, read, and understand our world.
Google showed off its 'Deep Dream,' software capable of taking an image and ascertaining what was in it by turning it into a nightmare fusion of flesh and tentacles. The release follows research by scientists from Stanford University, called NeuralTalk, capable of analyzing images and describing them with eerily accurate sentences. The program is the work of Fei-Fei Li, director of the Stanford Artificial Intelligence Laboratory, and Andrej Karpathy, a graduate student. Their software is capable of looking at pictures of complex scenes and describing them in a single sentence. A picture of a man in a black shirt playing guitar, for example, is picked out as 'man in black shirt is playing guitar,' while pictures of a black-and-white dog jumping over a bar, a man in a blue wetsuit surfing a wave, and a little girl eating cake are also correctly described with a single sentence. In several cases, it's unnervingly accurate. Like Google's Deep Dream, the software uses a neural network to work out what's going on in each picture, comparing parts of the image to those it's already seen and describing them as humans would. Neural networks are designed to be like human brains, and they work a little like children.
Once they've been taught the basics of our world — that's what a window usually looks like, that's what a table usually looks like, that's what a cat who's trying to eat a cheeseburger looks like — then they can apply that understanding to other pictures and video. It's still not perfect. A fully-grown woman gingerly holding a huge donut is tagged as 'a little girl holding a blow dryer next to her head,' while an inquisitive giraffe is mislabeled as a dog looking out of a window.
A cheerful couple in a garden with a birthday cake appears under the heading 'a man in a green shirt is standing next to an elephant,' with a bush starring as the elephant and, weirdly, the cake standing in for the man. But in most cases, these descriptions are secondary guesses — alongside the elephant suggestion, the program also correctly identifies the cake couple as 'a woman standing outside holding a coconut cake with a man looking on.' 'The software easily identifies a dog jumping over a bar.' The incredible amount of visual information on the internet has, until recently, had to be manually labeled in order for it to be searchable.
When Google first built Google Maps, it relied on a team of employees to dig through and check every single entry, humans given the task of looking at every number captured in the world to make sure it denoted a real address. When they were done, and sick of the tiresome job, the task was handed over to the Google Brain neural network. Where it had previously taken a team weeks of work to complete the task, Google Brain could transcribe all of the Street View data from France in under an hour. 'I consider the pixel data in images and video to be the dark matter of the Internet,' said Li. 'We are now starting to illuminate it.' Leading the charge for that illumination are web giants such as Facebook and Google, who are keen to categorize the millions of pictures and search results they need to sift through. Previous research focused on single object recognition — a computer taught itself to recognize a cat, for example — but computer scientists have said this misses the bigger picture.
'We've focused on objects, and we've ignored verbs,' Ali Farhadi, a computer scientist at the University of Washington, told The New York Times. But more recent programs have focused on more complex strings of data in an attempt to teach computers what's happening in a picture rather than simply what's in shot. The Stanford scientists' study uses the kind of natural language we could eventually use to search through image repositories, leading to an easy hypothetical situation where, rather than scanning through tens of thousands of family photos, services such as Google Photos could quickly pull up 'the one where the dog is jumping on the couch,' or 'the selfie I took in Times Square.' Search results, too, would benefit from the technology, potentially allowing you to search YouTube or Google for the exact scenes you want, rather than simply finding the pictures or videos their uploaders were mindful enough to correctly label. Neural networks have potential applications out in the real world, too. At CES this year, Nvidia's Jen-Hsun Huang introduced his company's Drive PX, a 'supercomputer' for your car that incorporated 'deep neural network computer vision.'
Using the same learning techniques as other neural networks, Huang said the technology will be able to automatically spot hazards as you drive, warning you of pedestrians, signs, ambulances, and other objects that it's learned about. The neural network means the Drive PX won't need to have reference images for every kind of car — if it's got four wheels like a car, a grille like a car, and a windscreen like a car, it's probably a car.
Larger cars could be SUVs, while cars with lights on top could be police vehicles. Huang's company has been chasing this technology for a while, too, having provided the graphics processing units actually used by the Stanford team. As the technology to automatically work out what's happening in images is progressing rapidly, its leaders are making their efforts available to all. Google's Deep Dream, in particular, has captured the imagination of many with its trippy visual side effects, contorting images into the shapes of dogs and slugs as it attempts to find reference points it understands.
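The "four wheels like a car, a grille like a car, a windscreen like a car" reasoning is, at heart, weighted-feature classification. A deliberately tiny, hand-rolled perceptron on made-up features, nothing like the scale or architecture of a real vision network, shows the learn-then-generalize idea:

```python
# Tiny perceptron on made-up binary features (a caricature of feature-based
# recognition; real vision networks learn their own features and are vastly
# larger).
def train(samples, labels, epochs=20, lr=0.1):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Made-up features: [has_wheels, has_fur]; label 1 = "car", 0 = "cat".
X = [[1, 0], [1, 0], [0, 1], [0, 1]]
y = [1, 1, 0, 0]
w, b = train(X, y)

# A new, unseen example with wheels and no fur is classified as a car.
new = [1, 0]
pred = 1 if sum(wi * xi for wi, xi in zip(w, new)) + b > 0 else 0
print(pred)
```

After training, the "has_wheels" weight ends up positive and the "has_fur" weight negative, so a never-before-seen wheeled example is still recognized, which is the generalization the article describes.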
But the proliferation of this machine learning has a creepy side too — if your computer can work out exactly what's happening in your pictures, what happens when it works out exactly what you are?

Bioinspired devices are helping robotics engineers to solve many problems

Nature just does things better than humans. It is a galling fact for engineers, but it is very often true and, in all honesty, not surprising; given millions of years to work on a problem, the random mutations and gradual change of evolution will often find a solution where mere human ingenuity, even assisted by computers, cannot.
Mimicking nature’s solutions has, therefore, always been a part of the job of an engineer; and robotics, possibly the most important field where engineers try to copy the abilities of living beings, is providing fruitful ground for bioinspired technologies. Investigating nature’s solutions is the preserve of biologists, but their insights into the often surprising and even seemingly perverse ways that organisms achieve what might seem impossible — such as climbing a sheer, smooth surface — can often give engineers ideas for how to solve completely different problems. For example, no starfish has ever tried to lift a pumpkin, but studying how their feet work and allow them to grasp and manoeuvre their limbs over the complex and textured topologies of coral reefs can lead to robots that can handle awkwardly shaped, delicate objects.
George Whitesides’ robot gripper, modelled after a starfish, depends on material properties rather than mechanics

George Whitesides of Harvard University is best known as an eminent chemist and author of textbooks, but is currently working on developing bioinspired systems for activities such as grasping and handling. “What we’re doing isn’t biology,” he told a recent meeting at the Royal Society. “For example, the processes – the combination of systems of sensors, muscles and brain [and other organs that process information] – that allow a squid to control its tentacles are still beyond us. All we’re doing is trying to understand the mechanics of a tentacle to the extent that we can mimic some of its characteristics, even if the mechanisms used in that mimicry are unrelated to those used by the squid.” Systems based on nature are attractive to engineers for several reasons, Whitesides said. “They tend to work well with humans because their functional parts are frequently soft, so they aren’t as hazardous as heavy industrial machinery with fast-moving metal components. Also, they tend to be simpler, because a lot of the time we replace complex electronic or mechanical control systems by simply making use of the properties of the materials of construction and how we actuate them.
That often means they’re relatively cheap, so they can be built for a single use. For example, a soft, bioinspired robot built for the search part of a search and rescue mission – such as locating survivors in the rubble of collapsed buildings after an earthquake – can just be abandoned in the ruins.” The starting point of Whitesides’ bioinspired work was the starfish, because the star shape was helpful for making a gripper system. “There’s a lot of industrial interest in grippers,” he explained. “Companies such as Amazon are looking for new ways to handle items in their warehouses that have a wide variety of shapes and sizes, so a flexible, versatile gripper that self-modifies to handle the variety of objects is of great interest.” The body of such robots is made from polydimethylsiloxane, a soft silicone rubber, cast with networks of pneumatic channels and inflatable cells along the fingers.
Simply inflating these chambers causes the fingers to curl up. “They curl from the tip towards the root, and that’s not a result of mechanical actuation; just from the structure and property of the material,” Whitesides said. “You can introduce more structure, like less flexible sections along the length of the finger.
They’d then act like joints; knuckles in the finger.”

The climbing ability of the gecko has intrigued engineers and scientists since Aristotle, and is the subject of research by Mark Cutkosky, a mechanical engineer from Stanford University. Cutkosky noted that the dry adhesion exhibited by geckos’ feet (sticking to surfaces without any oil or adhesive) is particularly interesting to the space community, because it is one of the few techniques for sticking that works in a vacuum, at low temperatures and on non-magnetic surfaces, and requires only low forces to attach and detach items; as such, it is being studied for handling items such as fuel tanks and solar panels. 'A few synthetic dry adhesives have even demonstrated levels of adhesion that, for small areas and under controlled conditions, considerably exceed those of the gecko,' Cutkosky said. 'However, no synthetic adhesive fully captures the desirable properties of the gecko system for climbing.' A gecko’s toes are covered in microscopic hairs called setae, each of which branches into flat-tipped structures called spatulae. When the gecko presses its foot down, these spatula edges contact the surface; then, as the animal pulls its foot along the surface, the faces of the spatulae are pulled onto the surface. This allows very small forces called van der Waals interactions – the same tenuous forces that hold liquids and vapours together (for example in clouds) – to act.
Individually the forces are tiny, but the cumulative effect of the spatulae all over the gecko’s foot is sufficient to hold the whole animal against the surface, even if the surface is vertical or overhead. In fact, the effect is so strong that a gecko can hang from a single toe. One key factor is that the adhesion is directional, Cutkosky stressed; it only works when a force is applied pulling from the palm of the ‘hand’ towards the tip of the toes. Otherwise, it isn’t sticky at all.
“If you touch a gecko’s foot, it doesn’t feel sticky, even if it’s sticking to you at the time,” he said. Cutkosky’s initial experimental gecko-climbing robot, which went by the name of Stickybot (perhaps not the best name, because as we have seen geckos aren’t actually sticky), had feet whose soles were made from thin wedges of silicone rubber, 20µm wide at the tip and 80µm long.
In an ‘unloaded’ state, as these feet touch a surface the tips of the wedges come into contact first; hence they are not sticky. But with a small and increasing shear force, the edges bend over, bringing their flat faces against the surface. Tiny wedge-shaped structures in soft materials mimic the setae and spatulae of geckos; an applied shear force F_t makes the wedges bend over, bringing their faces into contact with the surface. “Thus they represent a greatly simplified analogue to the gecko’s setal stalks and spatulae, which also present a small contact area when unloaded, but flatten out for a much larger contact area when pulled in the preferred direction,” Cutkosky said. “Although the microwedges have a much lower maximum adhesive stress than gecko setae, they are adequate for climbing robots and other applications.” Incorporating a mechanism like this into robots is fruitful both for engineers and biologists, Cutkosky argued. For engineers, approximation is the only way to proceed. Agreeing with Whitesides, he noted that we can’t possibly replicate how animals operate; we can only approximate. In fact, he said some biomimicking engineers, such as Prof Bob Full, director of the biomechanics laboratory at the University of California, Berkeley, argue that engineers shouldn’t try to copy nature at all, because evolution works on the basis of “what is just good enough” and optimises from there: that is not a good way for engineers to develop technologies.
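The directionality Cutkosky describes can be captured in a two-line model, in the spirit of the 'frictional adhesion' idea proposed for gecko feet: the available pull-off force grows with the shear load in the preferred direction and vanishes otherwise. The coefficient `alpha` and the function names are illustrative, not measured gecko or microwedge values.

```python
# Sketch of a directional ("frictional") adhesion limit: adhesion is
# available only while the contact is loaded in shear in the preferred
# direction. alpha is an illustrative coefficient, not a measured value.

def max_adhesion(shear, alpha=0.4):
    """Maximum normal pull-off force available at a given shear load.

    shear > 0 means pulling in the preferred direction (palm to toe);
    with no shear, or shear in the wrong direction, there is no adhesion.
    """
    return alpha * shear if shear > 0 else 0.0

def holds(normal, shear, alpha=0.4):
    """True if a contact with the given loads stays attached.

    normal > 0 pulls away from the surface; the contact holds while that
    pull-off load stays within the shear-dependent adhesion limit.
    """
    return normal <= max_adhesion(shear, alpha)
```

This reproduces the counter-intuitive property in the text: with zero shear the foot "doesn’t feel sticky" at all, yet under shear it supports a substantial normal load.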
“Because we cannot exactly reproduce complex biological structures, we attempt to identify the most important effects, so we can incorporate them into simplified approximations of what we observe in nature,” Cutkosky said. “We then fabricate robotic mechanisms that embody those principles and test them.” For Stickybot, the team used a technique called shape deposition manufacturing, which allows them to combine hard and soft polymers with embedded fibres. “It is at this stage that robotics can provide useful information for biologists as well as engineers, because it is much easier to conduct comprehensive tests on robots than on animals,” he said. “Even geckos and insects sometimes aren’t in the mood to co-operate with inquisitive scientists, and you can’t force a lizard to climb a wall when it doesn’t want to. We then analyse the results and invariably have to refine our hypotheses and robotic implementations, and so the cycle repeats.” Mark Cutkosky’s research culminated in Stickybot — a climbing robot which used van der Waals adhesion like a gecko. Some of the gecko’s feats are hard for roboticists to copy; for example, they can run down walls as well as up them, but to do this they have to turn their hind feet around so they face in the opposite direction.
This influences how roboticists have to think around problems, Cutkosky said. “It is useful to recast the observations regarding the gecko’s directional adhesive structures in terms of robot force and motion planning. For robot control, it is useful to think of constraints and regions of allowable forces in a multi-dimensional force space. The objective is to plan force trajectories for the robot foot, so that contact forces remain in a safe region.”
To give an example of this, he explained that the adhesion force is variable; it depends on how hard the gecko pulls its foot along the wall, and this can also be applied to robot motion. When taking a step up a wall, the gecko [and therefore the robot] has to apply a large shear force for maximum adhesion. When ready to detach its foot at the end of a step, it relaxes the shear force, bringing the combined normal force [pressing the foot into the wall] and shear force [pulling along the wall to seat the spatulae] towards the origin of the plot and allowing it to detach its foot with almost no detachment force. “In practice, adopting this loading and unloading strategy was essential for getting the Stickybot robot to climb smoothly and reliably.” This is where model building and study come in. If a gecko climbs a wall, intuition might say that it presses equally hard with both sets of feet, or presses harder with the feet that are lower down the wall. This is how humans climb, exerting more force with their legs than their arms to gain height. “But this is precisely the wrong strategy,” Cutkosky said.
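The loading/unloading strategy described above can be sketched as a planned trajectory in the (shear, normal) force plane: ramp up shear first to buy adhesion, carry the pull-off load at full shear, then relax both forces back toward the origin so the foot detaches almost for free. The numbers, the adhesion coefficient and the helper names are all illustrative, not Stickybot’s actual controller.

```python
# Sketch of the load-then-unload foot strategy used to make a gecko-style
# robot climb smoothly. Forces and the adhesion coefficient are made up.

def adhesion_limit(shear, alpha=0.4):
    # Directional adhesion: pull-off capacity grows with shear load.
    return alpha * shear if shear > 0 else 0.0

def step_trajectory(peak_shear=10.0, load=2.0, n=20):
    """(shear, normal) waypoints for one foot over a climbing step."""
    traj = []
    # 1. Attach: ramp up shear before asking for any adhesion.
    for i in range(n):
        traj.append((peak_shear * (i + 1) / n, 0.0))
    # 2. Hold: carry the normal (pull-off) load at full shear.
    traj.append((peak_shear, load))
    # 3. Detach: relax shear and normal force together toward the
    #    origin, keeping the load under the shrinking adhesion limit.
    for i in range(n, 0, -1):
        s = peak_shear * i / n
        traj.append((s, min(load, 0.5 * adhesion_limit(s))))
    return traj

def safe(traj, alpha=0.4):
    """True if every waypoint stays inside the adhesive's safe region."""
    return all(nrm <= adhesion_limit(sh, alpha) for sh, nrm in traj)
```

The whole trajectory stays inside the safe region, and the final waypoint sits near the origin, which is exactly the "almost no detachment force" condition the text describes.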
“Instead, the gecko or robot should pull harder with its front limbs, so that it has more adhesion with which to work.” One way this is now being applied has nothing to do with climbing robots at all; instead, it is being used with micro-air vehicles (MAVs) so they can perch on vertical walls, windows and ceilings. This gets around an inevitable limitation of micro-vehicles; they are too small to carry enough power to fly for long. Perching means that they can carry out surveillance or sample the atmosphere without needing to expend too much power. But because gecko-mimicking adhesive is directional, the MAV has to land in such a way that it loads its ‘gecko panel’ to create adhesion – that is, initially face on – then drag its ‘setae’ along slightly to create the van der Waals interaction. A device to help a micro-air vehicle (MAV) stick to a vertical surface consists of two ‘gecko’ pads linked by springs to pull the pads into tension and activate their gripping properties. “The vehicle normally flies at speeds of up to 10m/sec and pitches upward to reduce its speed to 1–2m/sec for landing,” said Cutkosky.
“This is still rather fast, but is desirable because it makes the MAV much less vulnerable to air currents than a vehicle hovering adjacent to a wall.” The flying philosophy is like flying the MAV through a series of funnels in space, where at each stage the goal is to get the craft to the mouth of its next funnel. To make it even more complex, the landing sequence has to incorporate enough time for the setae to bend over. Bringing a wall-clinging MAV in for a landing involves navigating a series of virtual funnels. One way to get around this is to design a gripper system with two tiles, each equipped with a gecko-like dry adhesive surface, but arranged so that the adhesive works in the opposite direction: that is, the tiles stick when they are pulled towards each other, and disengage when they are pushed apart. These are linked at the top by a triangular truss made from a spring material, with another linkage acting as a tensioning tendon between the two tiles.
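The "sequence of funnels" idea can be sketched as sequential composition: each funnel is a region of state space, and the controller’s only job at each stage is to deliver the vehicle into the mouth of the next one. The bounds below (distance to wall, speed range) are invented for illustration, loosely scaled to the 10 m/s cruise and 1–2 m/s touchdown quoted in the text; they are not Stanford’s actual controller.

```python
# Sketch of a landing described as a sequence of funnels in
# (distance-to-wall, speed) state space. All bounds are illustrative.

FUNNELS = [
    # (max distance to wall [m], allowed speed range [m/s])
    (20.0, (6.0, 10.0)),   # approach: cruise toward the wall
    (5.0,  (2.0, 6.0)),    # pitch-up: bleed off speed
    (1.0,  (1.0, 2.0)),    # touchdown: slow enough to load the pads
]

def in_funnel(state, funnel):
    dist, speed = state
    max_dist, (lo, hi) = funnel
    return dist <= max_dist and lo <= speed <= hi

def lands(profile):
    """True if the flight profile visits every funnel in order."""
    stage = 0
    for state in profile:
        if stage < len(FUNNELS) and in_funnel(state, FUNNELS[stage]):
            stage += 1
    return stage == len(FUNNELS)
```

A profile that skips a funnel, for example one that arrives at the wall still travelling at 5 m/s, fails the sequence, which is the planning failure the funnel picture is meant to prevent.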
As the MAV lands, it collapses the triangular truss and the tensioning tendon pulls the tiles towards each other. To take off, it first has to press into the surface slightly to disengage the adhesive. There are many differences between natural and engineered gecko adhesives, Cutkosky said – not least the ways the various mechanisms are made. “When making synthetic gecko adhesives, we use bulk manufacturing processes such as lithographic patterning and micro-machining. As we progress from microscopic to macroscopic features, we typically need to employ entirely different processes and machines,” he explained. “New manufacturing and prototyping processes such as micro-machining and shape deposition manufacturing expand our repertoire of materials, dimensional scales and geometries, but do not overcome the limitation that each additional level of hierarchy and complexity is costly.” Whitesides suggested some more areas where bioinspiration might be fruitful. Cockroaches, for example, might be an even better source of inspiration than geckos, especially in search and rescue; they can manoeuvre over a variety of rough terrain, glide into small spaces and even sprint on two legs.
Energy use is another area where nature may have much to teach us: “A pony-sized hard robot, for example, uses about 100 times more energy than a pony to do fewer functions,” he said. “We do not understand the constraints to efficiency in biological systems in the detail we understand the thermodynamics of work carried out by mechanical, human-made heat engines.” The way living cells set up energy networks – metabolism – is also beyond us, he added. “These networks are like nothing we can rationally construct.
The central element is the ability of individual reactions to ‘talk to one another’ through environmentally sensitive catalysts [especially enzymes]. How do these networks work, and why are they stable?” This question is, Whitesides concluded, really another way of asking: “What is life?”

The memory game: Rapid formation of new associations, by Greg Bissonette, PhD

The formation of memories occurs rapidly, sometimes even after a single experience. Memory formation depends on an area of the brain known as the medial temporal lobe (MTL). However, the mechanism for how memories are so swiftly encoded is largely unknown, and large technical challenges generally hinder experiments in awake, behaving humans. Using human neurosurgical patients with electrodes already implanted into their MTL for health reasons, Ison, Quiroga, and Fried were able to record from single neurons as they formed new associations using combinations of familiar or novel visual stimuli. By recording the firing of MTL neurons during this process, the authors define a credible neural mechanism for how episodic memories are formed.
In the experiment, participants were asked to view a series of images of people and locations. Their neural activity was recorded and analyzed to determine if any MTL neurons increased firing for an image of a particular landmark or person alone. After identifying neurons which fired for only one of the two conditions (e.g. an image of the White House, but not an image of volleyball player Kerri Walsh), experimenters presented a series of these and other cues to the participant. In one of the conditions, a non-preferred image (e.g. Kerri Walsh) was paired with a preferred image (e.g. the White House).
Using these trial-types, researchers were able to determine how MTL neurons fired for their ‘preferred’ image, a ‘non-preferred’ image, and a combination of a non-preferred image with a preferred image. Researchers found that MTL neurons rapidly increased firing to the non-preferred image after experiencing it in conjunction with the preferred image. Neural firing to the preferred image did not change over time, but neural firing for the non-preferred image became significantly elevated after it was paired with the initial preferred image. This was not true of other non-preferred images that were presented alone. Thus, by combining two images in a novel way, MTL neurons rapidly associated one image with another. This association occurred very quickly—within a few trials.
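The logic of the analysis above can be sketched in a few lines: compare a neuron’s firing to the non-preferred image before and after pairing, and flag the neuron as having formed the association when the response rises well above baseline. The firing rates below are synthetic, and the simple mean-ratio criterion stands in for the study’s actual statistics.

```python
# Sketch of the pairing analysis from the single-neuron study: did the
# response to the non-preferred image rise after it was paired with the
# preferred one? Rates are synthetic; the criterion is illustrative.

from statistics import mean

def became_associated(before, after, factor=2.0):
    """True if mean firing after pairing exceeds `factor` x baseline."""
    return mean(after) > factor * mean(before)

# Synthetic trial firing rates (spikes/s) for one MTL neuron:
non_pref_before = [2, 3, 1, 2, 2]    # non-preferred image, alone
non_pref_after = [8, 9, 7, 10, 8]    # same image, after pairing
control_alone = [2, 2, 3, 1, 2]      # a different image, never paired
```

Run on these synthetic trials, the paired image crosses the criterion while the never-paired control does not, mirroring the selectivity reported in the study.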
Additionally, neural firing was significantly elevated during free recall in the absence of the visual cues, suggesting long-term formation of the associations. This human study affirms the results of animal recording studies and demonstrates that the same effect takes place in humans. Further, these results show an almost effortless formation of new associations which persists during free recall—all key elements of episodic memory. Together, these data show a possible mechanism for the rapid inception of new associations in MTL neurons and provide a possible neural substrate for the formation of new memories.

Title: Biomedical ecosystem focused on innovative knowledge encoded with SNOMED CT
Presenter: Prog.
Gustavo Tejera, SUEIIDISS and KW Foundation
October 25th-30th, 2015 - Radisson Montevideo, Uruguay

Audience: Doctors, project managers, managers of content departments, architects of the digital society, MBAs, engineers, analysts of big data and IoT, interoperability and automation specialists.

Objectives: To transmit the key concepts needed to create and share reusable contents encoded with SNOMED CT. These contents can improve application logic and training at the point of care without their creators needing to know how to program.
It is the first step towards a social network 3.0 based on SNOMED CT.

Abstract: SNOMED CT is a source of incremental knowledge for articulating episodes, processes, tasks, forms, descriptors, indicators, rules and agents in all layers of the electronic health record. How can reusable components be built with SNOMED CT knowledge to improve the logic of applications? Web 3.0 is ready to start, but there are difficulties related to the training of professionals, the value of knowledge and market rules. In this study we present the experience of the KW Foundation in the development and implementation of HealthStudio (open source) at the National Cancer Institute, Sanatorium John Paul II, Medical Federation of the Interior, Uruguayan Medical Sanatorium, College of Nursing and Maryland University.
In a big-data set composed of more than 20 million log events, we discovered how to involve users in the construction of processes and contents that increase contextual cognition both at the point of care and in logistical and administrative tasks. With the construction of reusable SNOMED CT knowledge, the healthcare community gains a great facilitator for the essential alignment ('light'), connection ('camera') and interoperability ('action'). We believe that an innovative, cognitive, community-driven and incremental ecosystem can be built on the basis of SNOMED CT, ready for the generation and analysis of big data and the Internet of Things. But above all, it is essential to ensure that these tools allow the inclusion of all levels in the democratic construction of eHealth and the Digital Society.

References
• SUEIIDISS web page, HL7 Uruguay.
• HL7 International web page.
• UMLS web page.
• KW Foundation web page, free sources for community content builders.
• HealthStudio web page, free downloads for community content builders.
• Book “The New Warriors in the Digital Society”, Gustavo Tejera, 2014.
• 2012 Computerworld Award laureate, KW Social Network initiative.
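As a concrete illustration of what "reusable contents encoded with SNOMED CT" might look like, here is a minimal sketch of a form element whose clinical meaning lives in concept identifiers rather than in application code, so non-programmers could author and share it. The structure is hypothetical, not HealthStudio’s actual format; the concept IDs are commonly cited SNOMED CT examples and should be verified against a current release.

```python
# Hypothetical declarative form element bound to SNOMED CT concepts.
# Structure and field names are invented for illustration.

FEVER_CHECK = {
    "concept_id": "386661006",     # SNOMED CT: Fever (finding)
    "label": "Fever present?",
    "type": "boolean",
    "triggers": [
        # If recorded as true, suggest capturing body temperature.
        {"when": True, "suggest_concept": "386725007"},  # Body temperature
    ],
}

def triggered_concepts(element, value):
    """Resolve which follow-up concepts a recorded value triggers."""
    return [t["suggest_concept"] for t in element["triggers"]
            if t["when"] == value]
```

Because the element carries its meaning as codes, any SNOMED CT-aware application can reuse it, which is the kind of alignment and interoperability the abstract argues for.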
WiFi and non-WiFi interference.
I have already captured some; I enabled the auto-save function and then concatenated the captures into a single file, and sometimes it works. But it has saved very few packets: although the packet counter says more than 1,000,000, when I open the .cap file I see many MACs, and the first one has the most, but I have never ended up with more than 7,000 packets.
What am I doing wrong?? I have also used the traffic-injection option, but it is just the same. And in the filters I also set it to capture only packets.
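One way to check what actually landed in the concatenated capture is to count saved frames per transmitter MAC yourself. The sketch below uses only the standard library and assumes a classic little-endian pcap file whose link type is raw IEEE 802.11 with no radiotap header; if your capture tool prepends radiotap headers, the MAC offsets will differ.

```python
# Minimal sanity check for a .cap file: count frames per transmitter
# MAC using only the standard library. Assumes classic little-endian
# pcap with raw 802.11 frames (no radiotap header) -- an assumption to
# verify against your own capture setup.

import struct
from collections import Counter

def count_tx_macs(path):
    counts = Counter()
    with open(path, "rb") as f:
        header = f.read(24)                    # pcap global header
        magic, = struct.unpack("<I", header[:4])
        assert magic == 0xA1B2C3D4, "not a little-endian pcap file"
        while True:
            rec = f.read(16)                   # per-packet record header
            if len(rec) < 16:
                break
            _, _, incl_len, _ = struct.unpack("<IIII", rec)
            frame = f.read(incl_len)
            if len(frame) >= 16:
                # 802.11 header: frame control (2) + duration (2) +
                # addr1 (6) + addr2 (6); addr2 is the transmitter.
                counts[frame[10:16].hex(":")] += 1
    return counts
```

If the counter in the capture tool says a million packets but this count on the saved file is tiny, the discrepancy is in what is being written to disk (filters, save settings), not in what was seen over the air.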