Shattered Nerves: How Science Is Solving Modern Medicine's Most Perplexing Problem


Most scientists are capable of working out causal mechanisms that have more than one dimension. Some can even handle five! Also, the actual causal mechanisms that scientists investigate are far more complicated than my model allows for. One explanation may be related to many effects, multiple explanations combine with each other nonlinearly, explanations may be correlated, and so forth. Still, there is value in retiring the implicit theory that we should pursue the largest effects most doggedly. I suspect that every scientist has her own favorite example of the perils of this theory.

In my field, lakes of ink have been spilled attempting to find "the" explanation for why people consider it acceptable to redirect a speeding trolley away from five people and towards one, but not acceptable to hurl one person in front of a trolley in order to stop it from hitting five. This case is alluring because the effect is huge and its explanation is not at all obvious.

With the benefit of hindsight, however, there is considerable agreement that it does not have just one explanation. In fact, we have tended to learn more from studying much smaller effects with a key benefit: a sole cause. It is natural to praise research that delivers large effects and the theories that purport to explain them. And this praise is often justified—not least because the world has large problems that demand ambitious scientific solutions. Yet science can advance only at the rate of its best explanations.

Often, the most elegant ones are built around effects of modest proportions. Children of the 1980s, like the younger of these two co-authors, may fondly remember a TV cartoon called G.I. Joe, whose closing conceit—a cheesy public service announcement—remains a much-parodied YouTube sensation almost thirty years later.

Following each of these moralizing pronouncements came the show's famous epithet: "Now you know. And knowing is half the battle." While there may be some domains where knowing is half the battle, there are many more where it is not. Recent work in cognitive science has demonstrated that knowing is a shockingly tiny portion of the battle for most real-world decisions. You may know that a prisoner's guilt is independent of whether or not you are hungry, but she'll still seem like a better candidate for parole when you've recently had a snack.

You may know that a job applicant of African descent is as likely to be qualified as one of European descent, but the negative aspects of the former's resume will still stand out. And you may know that a tasty piece of fudge shaped like dogshit will taste delicious, but you'll still be pretty hesitant to eat it. The lesson of much contemporary research in judgment and decision-making is that knowledge—at least in the form of our consciously accessible representation of a situation—is rarely the central factor controlling our behavior. The real power of online behavioral control comes not from knowledge, but from things like situation selection, habit formation, and emotion regulation.

This is a lesson that therapy has taken to heart, but one that "pure science" continues to neglect. And so the idea that cognitive science needs to retire is what we'll call the G.I. Joe Fallacy: the idea that knowing is half the battle. It needs to be retired not just from our theories of how the mind works, but also from our practices of trying to shape minds to work better.



You might think that this is old news. After all, thinkers have been pointing out for centuries that much of human action isn't under rational control. Don't we know by now that the G.I. Joe Fallacy is just that—a fallacy? The irony is that knowing that the G.I. Joe Fallacy is a fallacy is—as the fallacy would predict—less than half the battle. Even if you know about the left-digit anchoring effect (the tendency for a price like $3.99 to feel much cheaper than $4.00), the lower-priced item will still feel like a significantly better deal. Even if you know about ego depletion effects, the prisoner you encounter after lunch will still seem like a better candidate for parole.

Even if you know that implicit bias is likely to affect your assessment of a resume's quality, you will still experience the candidate with the African-American name as being less qualified than the candidate with the European-American name. And even if you know about Paul Rozin's disgust work, you will still hesitate to drink Dom Perignon out of a sterile toilet bowl. Knowing is not half the battle for most cognitive biases, including the G.I. Joe Fallacy itself.

Simply recognizing that the G.I. Joe Fallacy exists is not sufficient for avoiding its grasp. The Internet scholar Clay Shirky puts it well: "There's no such thing as information overload. There's only filter failure." These aren't trends powered by technology. They are conditions of life. Filters in a digital world work not by removing what is filtered out; they simply don't select for it. The unselected material is still there, ready to be let through by someone else's filter.

Intelligent filters, which is what we need, come in three kinds: smart people who filter for us, smart crowds whose choices we can follow, and smart systems that learn from us as individuals. Here's the best definition of information that I know of: information is a measure of uncertainty reduced. It's deceptively simple. In order to have information, you need two things: an uncertainty that matters to us (we're having a picnic tomorrow; will it rain?) and something that resolves it (say, a weather forecast). But some reports create the uncertainty that is later to be resolved. Suppose we learn from news reports that the National Security Agency "broke" encryption on the Internet.

That's information! It reduces uncertainty about how far the U.S. government was willing to go. All the way. But the same report increases uncertainty about whether there will continue to be a single Internet, setting us up for more information when that larger picture becomes clearer. So information is a measure of uncertainty reduced, but also of uncertainty created.
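Read in Shannon's terms (my gloss; the essay never names him), "uncertainty reduced" can be made literal and counted in bits. A minimal Python sketch with invented rain probabilities:

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits: how much uncertainty a distribution holds."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# An uncertainty that matters to us: will it rain on tomorrow's picnic?
before = entropy_bits([0.5, 0.5])   # no idea either way: 1 bit of uncertainty
after = entropy_bits([0.9, 0.1])    # a forecast says "probably dry"
print(f"information received ≈ {before - after:.2f} bits")
```

On this reading, a report that raises new questions is adding fresh uncertainty even as it removes some, which matches the essay's point that information both reduces and creates uncertainty.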

Which is probably what we mean when we say, "Well, that raises more questions than it answers." Filter failure occurs not from too much information but from too much incoming "stuff" that neither reduces existing uncertainty nor raises questions that count for us. The likely answer is to combine the three types of filtering: smart people who do it for us, smart crowds and their choices, and smart systems that learn by interacting with us as individuals.

It's at this point that someone usually shouts out: what about serendipity? It's a fair point. We need filters that listen to our demands, but also let through what we have no way to demand because we don't know about it yet. Filters fail when they know us too well and when they don't know us well enough.

The roots of this issue go back at least to 1865, when Rudolf Clausius coined the term "entropy" and stated that the entropy of the universe tends to a maximum. This idea is now known as the second law of thermodynamics, which is most often described by saying that the entropy of an isolated system always increases or stays constant, but never decreases. Isolated systems tend to evolve toward the state of maximum entropy, the state of thermodynamic equilibrium. Even though entropy will play a crucial role in this discussion, it will suffice to use a fairly crude definition: entropy is a measure of the "disorder" of the physical system.

In terms of the underlying quantum description, entropy is a measure of the number of quantum states that correspond to a given description in terms of macroscopic variables, such as temperature, volume, and density. The classic example is a gas in a closed box. If we start with all the gas molecules in a corner of the box, we can imagine watching what happens next. The gas molecules will fill the box, increasing the entropy to the maximum. But it never goes the other way: if the gas molecules fill the box, we will never see them spontaneously collect into one corner.
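In standard statistical mechanics (a formula the essay describes in words but does not write down), this counting definition is Boltzmann's entropy, where Ω is the number of microscopic states compatible with the macroscopic description:

```latex
S = k_B \ln \Omega
```

A gas spread through the whole box has vastly more compatible microstates than a gas packed into one corner, which is why filling the box raises the entropy.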

This behavior seems very natural, but it is hard to reconcile with our understanding of the underlying laws of physics. The gas makes a huge distinction between the past and the future, always evolving toward larger entropy in the future. This one-way behavior of matter in bulk is called the "arrow of time." Yet the microscopic laws governing the molecules draw no such distinction: any movie of a collision could be played backwards, and it would also show a valid picture of a collision.

To account for some very rare events discovered by particle physicists, the movie is only guaranteed to be valid if it is also reflected in a mirror and has every particle relabeled as the corresponding antiparticle. But these complications do not change the key issue. There is therefore an important problem, now over a century old: to understand how the arrow of time could possibly arise from time-symmetric laws of evolution.

The arrow-of-time mystery has driven physicists to seek possible causes within the laws of physics that we observe, but to no avail. The laws make no distinction between the past and the future. Physicists have understood, however, that a low entropy state is always likely to evolve into a higher entropy state, simply because there are many more states of higher entropy. Thus, the entropy today is higher than the entropy yesterday, because yesterday the universe was in a low entropy state.

And it was in a low entropy state yesterday, because the day before it was in an even lower entropy state. The traditional understanding follows this pattern back to the origin of the universe, attributing the arrow of time to some not well-understood property of cosmic initial conditions, which created the universe in a special low entropy state. The egg splatters rather than unsplatters because it is carrying forward the drive toward higher entropy that was initiated by the extraordinarily low entropy state with which the universe began. Based on an elaboration of a proposal by Sean Carroll and Jennifer Chen, there is a possibility of a new solution to the age-old problem of the arrow of time.

This work, by Sean Carroll, Chien-Yao Tseng, and me, is still in the realm of speculation, and has not yet been vetted by the scientific community. But it seems to provide a very attractive alternative to the standard picture. The standard picture holds that the initial conditions for the universe must have produced a special, low entropy state, because it is needed to explain the arrow of time.

No such assumption is applied to the final state, so the arrow of time is introduced through a time-asymmetric condition. We argue, to the contrary, that the arrow of time can be explained without assuming a special initial state, so there is no longer any motivation for the hypothesis that the universe began in a state of extraordinarily low entropy. The most attractive feature is that there is no longer a need to introduce any assumptions that violate the time symmetry of the known laws of physics.


The basic idea is simple. We don't really know if the maximum possible entropy for the universe is finite or infinite, so let's assume that it is infinite. Then, no matter what entropy the universe started with, the entropy would have been low compared to its maximum. That is all that is needed to explain why the entropy has been rising ever since! The metaphor of the gas in a box is replaced by a gas with no box. In the context of what physicists call a "toy model," meant to illustrate a basic principle without trying to be otherwise realistic, we can imagine choosing, in a random and time-symmetric way, an initial state for a gas composed of some finite number of noninteracting particles.

It is important here that any well-defined state will have a finite value for the entropy, and a finite value for the maximum distance of any particle from the origin of our coordinate system. If such a system is followed into the future, the particles might move inward or outward for some finite time, but ultimately the inward-moving particles will pass the central region and will start moving outward. All particles will eventually be moving outward, and the gas will continue indefinitely to expand into the infinite space, with the entropy rising without limit.
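A minimal numerical sketch of this toy model (my own illustration, not the authors' calculation), using free, noninteracting particles and the logarithm of the mean squared distance from the origin as a crude stand-in for the growing entropy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: N noninteracting particles in a random, time-symmetric initial
# state with finite spread, and no box to confine them.
N = 1000
pos = rng.normal(0.0, 1.0, size=(N, 3))
vel = rng.normal(0.0, 1.0, size=(N, 3))

for t in [0.0, 1.0, 10.0, 100.0]:
    x = pos + vel * t                                  # free ballistic motion
    spread = np.log(np.mean(np.sum(x**2, axis=1)))     # crude entropy proxy
    outward = np.mean(np.sum(x * vel, axis=1) > 0)     # fraction moving outward
    print(f"t={t:7.1f}  log-spread={spread:6.2f}  moving outward={outward:.2f}")
```

Run forward from the randomly chosen state at t = 0, the spread grows without bound and eventually every particle is moving outward; running the same state backward in time gives the mirror-image story, which is the essay's point: an arrow of time appears without any time-asymmetric assumption.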

An arrow of time—the steady growth of entropy with time—has been generated, without introducing any time-asymmetric assumptions. An interesting feature of this picture is that the universe need not have a beginning, but could be continued from where we started in both directions of time. Since the laws of evolution and the initial state are time-symmetric, the past will be statistically equivalent to the future. Observers in the deep past will see the arrow of time in the opposite direction from ours, but their experience will be no different from ours. The year 2013 has just finished and, as is the case at this time of year, media pundits suggest a variety of words and terms that should be banned; some of the most common ones have included "YOLO," "bromance," "selfie," "mancave," and, of course, please God make it so, "twerking."

Similarly, some things in the science world beg to be retired. That's rarely the case simply because a term has been ubiquitous and irritating. Personally, I hope this phrase won't be retired, as I use it ubiquitously and irritatingly, with no plans to stop.

However, various science concepts should be retired because they are just plain wrong. An obvious example, more pseudo-science than science, is that evolution is "just" a theory. But what I am focusing on is a phrase that is right in the narrow sense, but carries very wrong connotations. This is the idea of "a gene-environment interaction." The notion of the effects of a particular gene and of a particular environment interacting was a critical counter to the millennia-old dichotomy of nature versus nurture.

Its utility in that realm most often took the form of, "It may not be all genetic—don't forget that there may be a gene-environment interaction," rather than, "It may not be all environmental—don't forget that there may be a gene-environment interaction." The concept was especially useful when expressed quantitatively, in the face of behavior geneticists' attempts to attribute percentages of variability in a trait to environment versus genes.

It also was the basis of a useful rule-of-thumb phrase for non-scientists—"But only if." In that case, you have something called a gene-environment interaction. What's wrong with any of that? It's an incalculably large improvement over "nature or nurture?" My problem with the concept is with the particularist use of "a" gene-environment interaction, the notion that there can be one. This is because, at the most benign, this implies that there can be cases where there aren't gene-environment interactions. Worse, that those cases are in the majority. Worst, the notion that lurking out there is something akin to a Platonic ideal as to every gene's actions—that any given gene has an idealized effect, that it consistently "does" that, and that circumstances where that does not occur are rare and represent either pathological situations or inconsequential specialty acts.

Thus, a particular gene may have a Platonically "normal" effect on intelligence unless, of course, the individual was protein malnourished as a fetus, had untreated phenylketonuria, or was raised as a wild child by meerkats. The problem with "a" gene-environment interaction is that there is no gene that does something. It only has a particular effect in a particular environment, and to say that a gene has a consistent effect in every environment is really only to say that it has a consistent effect in all the environments in which it has been studied to date.

This has become ever more clear in studies of the genetics of behavior, as there has been increasing appreciation of environmental regulation of epigenetics, transcription factors, splicing factors, and so on. And this is most dramatically pertinent to humans, given the extraordinary range of environments—both natural and culturally constructed—in which we live. Gordon Moore's paper stating that the number of transistors on integrated circuits will double every two years has become the most popular scientific analogy of the digital age. Despite being a mere conjecture, it has become the go-to model for framing complex progress in a simple formula.

There are good technological reasons to retire Moore's Law. One is the general consensus that Moore's Law will effectively cease to hold once transistors shrink below about 5 nanometers, which would mean a peak and a sharp drop-off in ten to twenty years. Another is the potential of quantum computers pushing computing into new realms, expected to become reality in three to five years.

But Moore's Law should be retired before it reaches its technological limits, because it has propelled the perception of progress in the wrong directions. Allowing its end to become an event would only amplify the errors of reasoning. First and foremost, Moore's Law has encouraged us to perceive the development of the digital era as a linear narrative. The simple curve of progression is the digital equivalent of the ancient wheat-and-chessboard problem with a potentially infinite chessboard.
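The arithmetic behind that comparison is easy to make concrete; a small illustrative sketch (standard numbers, not taken from the essay):

```python
# Wheat and chessboard: one grain on the first square, doubling on each of the
# 64 squares in turn.
grains = sum(2**square for square in range(64))
print(f"total grains: {grains:,}")     # 18,446,744,073,709,551,615

# The popular reading of Moore's Law applies the same doubling to transistor
# counts: doubling every two years for forty years is a factor of about a million.
print(f"forty years of doubling every two years: x{2 ** (40 // 2):,}")
```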

Like the Persian inventor of the game of chess, who demanded from the king a geometric progression of grains all across the board, digital technology seems to develop exponentially. But this model ignores the parallel nature of digital progress, which encompasses not only technological and economic development but scientific, social, and political change—changes that can rarely be quantified. Still, the Moore's Law model of perception has already found its way into the narrative of biotechnological history, where change becomes ever more complex.

Proof of progress is claimed in the simplistic reasoning of a sharp decline in the cost of sequencing a human genome—from the three billion dollars of the first human genome to the August 2013 cancellation of the Genomics X Prize for the first $1,000 genome, because the challenge had been outpaced by innovation. For both digital and biotechnological history, the linear narrative has been insufficient. The prowess of the integrated circuit has been the technological spark for a massive development, comparable to the wheel allowing the rise of urban society. Both technologies have been perfected over time, but their technological refinement falls short of illustrating the impact each has had.

It was about 25 years ago that scientists at MIT's Media Lab told me about a paradigmatic change in computer technology. In the future, they said, the number of other computers connected to a computer would matter more than the number of transistors on its integrated circuits. For a writer interested in, but not at the forefront of, computer technology, that was still groundbreaking news at the time. A few years later, the demo of the Mosaic browser was as formative as listening to the first Beatles record and seeing the first man on the moon had been for my parents. Changes since then have been so multilayered, interconnected, and rapid that comprehension has lagged behind ever since.

Scientific, social, and political changes occur in random patterns. Results have been mixed in equally random patterns. The slowdown of the music industry and the media has not been matched in publishing and film. The failed Twitter revolution in Iran had quite a few things in common with the Arab Spring, but even within the Maghreb the results differed wildly. Social networks have affected societies in sometimes exactly opposite ways—while the fad of social networking has fostered cultural isolation in Western society, it has created a counterforce of collective communication against the strategies of the Chinese party apparatus to isolate its citizenry from within.

Most of these phenomena have so far only been observed, not explained. It is mostly in hindsight that a linear narrative is constructed, if not imposed. The inability to monetize many of the greatest digital innovations, like viral videos or social networks, is just one of many proofs of how difficult it is to get a comprehensive grasp on digital history. Moore's Law and its numerous popular applications to other fields of progress thus create an illusion of predictability in the least predictable of all fields—the course of history.

These errors of reasoning will be amplified if Moore's Law is allowed to come to its natural end. Peak theories have become the lore of cultural pessimism. If Moore's Law is allowed to become a finite principle, digital progress will be perceived as a linear progression towards a peak and an end.


Neither will become a reality, because the digital is not a finite resource but an infinite realm of mathematical possibilities reaching out into the analog world of science, society, economics, and politics. Because this progress has ceased to depend on a quantifiable basis and on linear narratives, it will not be brought to a halt, nor even slowed down, if one of its strands comes to an end.

The end of Moore's Law will create the disillusionment of a finite digital realm, and that disillusionment will become as popular as the illusion of predictability it replaces. After all, there have been no loonies carrying signs saying "The End is Not Near". In the late summer of 1914, as European civilization began its extended suicide, dissenters were scarce. On the contrary: from every major capital, we have jerky newsreel footage of happy crowds, cheering in the summer sunshine.

More war and oppression followed in subsequent decades, and there was never a shortage of willing executioners and obedient lackeys. By mid-century, the time of Stalin and Mao and their smaller-bore imitators, it seemed urgent to understand why people throughout the 20th century had failed to rise up against masters who sent them to war, or to concentration camps, or to the gulag.

So social scientists came up with an answer, which was then consolidated and popularized into something every educated person supposedly knows: people are sheep—cowardly, deplorable sheep. This idea, that most of us are unwilling to "think for ourselves," instead preferring to stay out of trouble, obey the rules, and conform, was supposedly established by rigorous laboratory experiments. Worse yet, it's rampant in the conversation of educated laypeople—politicians, voters, government officials.

Yet it is false. It makes for bad assumptions and bad policies. It is time to set it aside. Some years ago, the psychologists Bert Hodges and Anne Geyer examined one of Solomon Asch's own conformity experiments from the 1950s. He'd asked people to look at a line printed on a white card and then tell which of three similar lines was the same length.

Each volunteer was sitting in a small group, all of whose other members were actually collaborators in the study, deliberately picking wrong answers. Asch reported that when the group chose the wrong match, many individuals went along, against the evidence of their own senses. But the experiment actually involved 12 separate comparisons for each subject, and most did not agree with the majority, most of the time.

In fact, on average, each person agreed three times with the majority, and insisted on his own view nine other times. To make those results all about the evils of conformity is to say, as Hodges and Geyer note, that "an individual's moral obligation in the situation is to 'call it as he sees it' without consideration of what others say." To explain their actions, the volunteers didn't indicate that their senses had been warped or that they were terrified of going against consensus. Instead, they said they had chosen to go along that one time. It's not hard to see why a reasonable person would do so.

The "people are sheep" model sets us up to think in terms of obedience or defiance, dumb conformity versus solitary self-assertion to avoid being a sheep, you must be a lone wolf. It does not recognize that people need to place their trust in others, and win the trust of others, and that this guides their behavior. Stanley Milgram's famous experiments, where men were willing to give severe shocks to a supposed stranger, are often cited as Exhibit A for the "people are sheep" model.

But what these studies really tested was the trust the subjects had in the experimenter. Indeed, questions about trust in others—how it is won and kept, who wins it and who doesn't—seem to be essential to understanding how collectives of people operate and affect their members. What else is at work? It appears that behavior is also susceptible to the sort of moment-by-moment influences that were once considered irrelevant noise (for example, divinity students in a rush were far less likely to help a stranger than were divinity students who were not late, in an experiment performed by John M.

Darley and Dan Batson). And then there is mounting evidence of influences that discomfit psychologists because there doesn't seem to be much psychology in them at all. For example, Neil Johnson of the University of Miami and Michael Spagat of University College London and their colleagues have found that the severity and timing of attacks in many different wars (different actors, different stakes, different cultures, different continents) adhere to a power law. If that's true, then an individual fighter's motivation, ideology, and beliefs make much less difference than we think for the decision to attack next Tuesday.

Or, to take another example, if, as Nicholas Christakis' work suggests, your risks of smoking, getting an STD, catching the flu, or being obese depend in part on your social network ties, then how much difference does it make what you, as an individual, feel or think? Perhaps the behavior of people in groups will eventually be explained as a combination of moment-to-moment influences (like waves on the sea) and powerful drivers that work outside of awareness (like deep ocean currents).

All the open questions are important and fascinating. But they're only visible after we give up the simplistic notion that we are sheep. It is a commonly held but erroneous belief that a larger study is always more rigorous or definitive than a smaller one, and a randomized controlled trial is always the gold standard. However, there is a growing awareness that size does not always matter and a randomized controlled trial may introduce its own biases.

We need more creative experimental designs. In any scientific study, the question is: "What is the likelihood that observed differences between the experimental group and the control group are due to the intervention or due to chance?" A randomized controlled trial (RCT) is based on the idea that if you randomly-assign subjects to an experimental group that receives an intervention or to a control group that does not, then any known or unknown differences between the groups that might bias the study are as likely to affect one group as the other.

While that sounds good in theory, in practice an RCT can often introduce its own set of biases and thus undermine the validity of the findings. For example, an RCT may be designed to determine whether dietary changes may prevent heart disease and cancer. Investigators identify patients who meet certain selection criteria.

When they meet with prospective study participants, investigators describe the study in great detail and ask, "If you are randomly-assigned to the experimental group, would you be willing to change your lifestyle?" However, if that patient is subsequently randomly-assigned to the control group, it is likely that this patient may begin to make lifestyle changes on their own, since they have already been told in detail what these lifestyle changes are.

If they're studying a new drug that is only available to the experimental group, then this is less of an issue. But in the case of behavioral interventions, those who are randomly-assigned to the control group are likely to make at least some of these changes because they believe that the investigators must think these lifestyle changes are worth doing or they wouldn't be studying them.

Or, they may be disappointed that they were randomly-assigned to the control group, and so they are more likely to drop out of the study, creating selection bias. Also, in a large-scale RCT, it is often hard to provide the experimental group enough support and resources to be able to make lifestyle changes. As a result, adherence to these lifestyle changes is often less than the investigators may have predicted based on earlier pilot studies with smaller groups of patients who were given more support.

The net effect of the above is to (a) reduce the likelihood that the experimental group will make the desired lifestyle changes, and (b) increase the likelihood that the control group will make similar lifestyle changes. This reduces the differences between the groups and makes it less likely that statistically significant differences between them will be detected. As a result, the conclusion that the intervention had no significant effect may be misleading.

This is known as a "type 2 error," meaning that there was a real difference but these design issues obscured the ability to detect it. That's just what happened in the Women's Health Initiative study, which followed nearly 49,000 middle-aged women for more than eight years. The women in the experimental group were asked to eat less fat and more fruits, vegetables, and whole grains each day to see if it could help prevent heart disease and cancer. The women in the control group were not asked to change their diets. However, the experimental group participants did not reduce their dietary fat as recommended—over 29 percent of their diet was composed of fat, not the study's goal of less than 20 percent.

Also, they did not increase their consumption of fruits and vegetables very much. In contrast, the control group reduced its consumption of fat almost as much and increased its consumption of fruits and vegetables, diluting the between-group differences to the point that they were not statistically significant. The investigators reported that these dietary changes did not protect against heart disease or cancer, when in fact the hypothesis was never really tested.
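A minimal simulation sketch of this dilution effect (my own illustration with invented effect sizes, not data from the Women's Health Initiative), showing how partial adherence in the experimental arm plus crossover in the control arm collapses statistical power and invites a type 2 error:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def trial_power(n=500, effect=0.3, adherence=1.0, crossover=0.0, sims=2000):
    """Fraction of simulated two-arm trials reaching p < 0.05.

    effect    -- true benefit (in SD units) for people who actually change
    adherence -- fraction of the experimental arm that really changes
    crossover -- fraction of the control arm that changes on its own
    """
    hits = 0
    for _ in range(sims):
        exp = rng.normal(0, 1, n) + effect * (rng.random(n) < adherence)
        ctl = rng.normal(0, 1, n) + effect * (rng.random(n) < crossover)
        hits += stats.ttest_ind(exp, ctl).pvalue < 0.05
    return hits / sims

print(trial_power(adherence=1.0, crossover=0.0))  # ideal trial: power near 1
print(trial_power(adherence=0.5, crossover=0.3))  # diluted trial: power collapses
```

With full adherence and no crossover, the simulated trial detects the effect nearly every time; with half the experimental arm adhering and a third of the control arm changing anyway, the very same true effect is usually missed.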

Paradoxically, a small study may be more likely to show significant differences between groups than a large one. The Women's Health Initiative study cost almost a billion dollars yet did not adequately test its hypotheses. A smaller study provides more resources per patient to enhance adherence at lower cost. Also, the idea in RCTs that you're changing only one independent variable (the intervention) and measuring one dependent variable (the result) is often a myth.

For example, let's say you're investigating the effects of exercise on preventing cancer. You devise a study whereby you randomly assign one group to exercise and the other group to no exercise. On paper, it appears that you're only working with one independent variable. In actual practice, however, when you place people on an exercise program, you're not just getting them to exercise; you're actually affecting other factors that may confound the interpretation of your results, even if you're not aware of them.

For example, people often exercise with other people, and there's increasing evidence that enhanced social support significantly reduces the risk of most chronic diseases. You're also enhancing a sense of meaning and purpose by participating in a study, and these also have therapeutic benefits.


And when people exercise, they often begin to eat healthier foods. We need new, more thoughtful experimental designs and systems approaches that take these issues into account. Also, new genomic insights will make it possible to better understand individual variations in response to treatment, rather than hoping that this variability will be "averaged out" by randomly-assigning patients. The world's languages differ to the point of inscrutability. Knowing the English word "duck" doesn't help you guess the French "canard" or the Japanese "ahiru." And yet languages also share deep commonalities. For instance, human languages tend to have parts of speech like nouns and verbs.

They tend to have ways to embed propositions inside other ones. But why? An influential and appealing explanation is known as Universal Grammar: core commonalities across languages exist because they are part of our genetic endowment. On this view, humans are born with an innate predisposition to develop languages with very specific properties. Infants expect to learn a language that has nouns and verbs, that has sentences with embedded propositions, and so on. This could explain not only why languages are similar but also what it is to be uniquely human, and indeed how children acquire their native language.

It may also seem intuitively plausible, especially to people who speak several languages: if English and Spanish… and French! To date, Universal Grammar remains one of the most visible products of the field of Linguistics—the one minimally counterintuitive bit that former students often retain from an introductory Linguistics class. But evidence has not been kind to Universal Grammar. Over the years, field linguists (they're like field biologists with really good microphones) have reported that languages are much more diverse than originally thought.

Not all languages have nouns and verbs.



Nor do all languages let you embed propositions in others. And so it has gone for basically every proposed universal linguistic feature. The empirical foundation has crumbled out from under Universal Grammar. We thought that there might be universals that all languages share and we sought to explain them on the basis of innate biases. But as the purportedly universal features have revealed themselves to be nothing of the sort, the need to explain them in categorical terms has evaporated.

As a result, what can plausibly make up the content of Universal Grammar has become progressively more modest over time. At present, there is evidence for little beyond perhaps the most general computational principles as part of our innate, language-specific human endowment. So it's time to retire Universal Grammar. It had a good run, but there's not much it can bring us now in terms of what we want to know about human language. It can't reveal much about how language develops in children—how they learn to articulate sounds, to infer the meanings of words, to put words together into sentences, to infer emotions and mental states from what people say, and so on.

And the same is true for questions about how humans have evolved or how we differ from other animals. There are ways in which humans are unique in the animal kingdom and a science of language ought to be trying to understand these. But again Universal Grammar, gutted by evidence as it has been, will not help much.

Of course, it remains important and interesting to ask what commonalities, superficial and substantial, tie together the world's languages. There may be hints there about how human language evolved and how it develops. But to ignore language's diversity is to set aside its most informative dimension. If one views science as an economist would, it stands to reason that the first scientific theory to be retired should be the one that offers the greatest opportunity for arbitrage in the marketplace of ideas. Thus it is not sufficient to look for ideas that are merely wrong; we should instead look for troubled scientific ideas that block progress by inspiring zeal, devotion, and what biologists politely term 'interference competition', all out of proportion to their history of achievement.

Here it is hard to find a better candidate for an intellectual bubble than that which has formed around the quest for a consistent theory of everything physical, reinterpreted as if it were synonymous with 'quantum gravity'. If nature were trying to send a polite message that there is other preliminary work to be done first before we quantize gravity, it is hard to see how she could send a clearer message than dashing the Nobel dreams for two successive generations of Bohr's brilliant descendants.

To recall, modern physics rests on a stool with three classical geometric legs first fashioned individually by Einstein, Maxwell, and Dirac.


The last two of those legs can be together retrofitted to a quantum theory of force and matter known as the 'standard model', while the first stubbornly resists any such attempt at an upgrade, rendering the semi-quantum stool unstable and useless. It is from this that the children of Bohr have derived the need to convert the children of Einstein to the quantum religion at all costs so that the stool can balance. But, to be fair to those who insist that Einstein must be now made to bow to Bohr, the most strident of those enthusiasts have offered a fair challenge.

Quantum exceptionalists claim, despite an unparalleled history of non-success, that string theory (now rebranded as M-theory, for matrix, magic, or membrane) remains literally 'the only game in town', because fundamental physics has gotten so hard that no one can think of a credible alternative unification program.



If we are to dispel this as a canard, we must make a good faith effort to answer the challenge by providing interesting alternatives, lest we be left with nothing at all. My reason for believing that there is a better route to the truth is that we have, out of what seems to be misplaced love for our beloved Einstein, been too reverential to the exact form of general relativity. For example, if before retrofitting we look closely at the curvature and geometry of the legs, we can see something striking, in that they are subtly incompatible at a classical geometric level before any notion of a quantum is introduced.

Einstein's leg seems the sparest and sturdiest, as it clearly shows the attention to function found in the school of 'intrinsic geometry' founded by the German Bernhard Riemann. The Maxwell and Dirac legs are somewhat more festive and ornamented, as they explore the freedom of form that is the raison d'être of a more whimsical school of 'auxiliary geometry' pioneered by the Alsatian Charles Ehresmann.

This leads one naturally to a very different question: what if the quantum incompatibility of the existing theories is really a red herring with respect to unification and the real sticking point is a geometric conflict between the mathematicians Ehresmann and Riemann rather than an incompatibility between the physicists Einstein and Bohr?

Even worse, it could be that none of the foundations are ready to be quantized. What if all three theories are subtly incomplete at a geometric level, and the quantum will follow once, and only once, all three are retired and replaced with a unified geometry? If such an answer exists, it cannot be expected to be a generic geometric theory, as all three of the existing theories are each, in some sense, the simplest possible in their respective domains. Such a unified approach might instead involve a new kind of mathematical toolkit combining elements of the two major geometric schools, which would only be relevant to physics if the observed world can be shown to be of a very particular subtype.

Happily, with the discoveries of neutrino mass, non-trivial dark energy, and dark matter, the world we see looks increasingly to be of the special class that could accommodate such a hybrid theory. One could go on in this way, but it is not the only interesting line of thinking. While, ultimately, there may be a single unified theory to summit, there are few such intellectual peaks that can only be climbed from one face.

We thus need to return physics to its natural state of individualism so that independent researchers need not fear large research communities who, in the quest for mindshare and resources, would crowd out isolated rivals pursuing genuinely interesting inchoate ideas that head in new directions. Established string theorists may, with a twinkle in the eye, shout 'predictions!' Yet potentially rival 'infant industry' research programs, as the saying goes, do not die in jest but in earnest. Given the history of scientific exceptionalism surrounding quantum gravity research, it is neither desirable nor necessary to retire M-theory explicitly, as it contains many fascinating ideas.

Instead, one need only insist that the training wheels that were once customarily circulated to new entrants to reinvigorate the community be transferred to emerging candidates from those who have now monopolized them for decades at a time. We can then wait, at long last, to see if 'the only game in town', when denied the luxury of special pleading by senior boosters, has the support from nature to stay upright.

It was born out of a mistranslation and has been misused ever since. Let's say you are a scientist and you have noticed a phenomenon you would like to tell the world about: the brain cannot truly follow two conversations at the same time. At best it can try to switch back and forth quickly, trying to keep up with the information. So much for your theory—you formulate your findings and share them with colleagues, they get argued and debated, just as they should be. But now something odd happens: while all your discussions were in English, and you wrote it all in English, and despite the fact that a large percentage of the leading scientists and Nobel laureates are English speaking… there is a group in Ulaan Baatar merrily taking up your findings with great interest, and your whole theory shows up all over the place. But here is the catch: you wrote that it is not possible to listen to two conversations at the same time, and thus their meaning to you is, well, undefined, until you decide to follow one of them properly.

However, as it turns out, Mongolian has no such word—"undefined"! Instead it got translated with an entirely different term, "uncertain," and the general interpretation of your theory has suddenly mutated from "one or the other of the two conversations will be unknown to you" to the rather distinctly altered interpretation "you can listen to one, but the other will be… uncertain." Saying that I am "unable to understand" both of them properly is one thing, but… All of this is of course just an analogy.

But it is pretty close to exactly what did happen—just the other way round: the scientist was Werner Heisenberg. His observation was not about listening to simultaneous conversations but about measuring the exact position and momentum of a physical system, which he described as impossible to determine at the same time. And what followed is really quite close to the analogy as well: rather than stating that either position or momentum is "as yet undetermined," it became common usage and popular wisdom to jump to the conclusion that there is complete "uncertainty" at the fundamental level of physics, and nature, even free will and the universe as such.
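For reference (the essay itself never writes it out), the relation behind Heisenberg's observation is usually stated in modern notation as a bound on the product of the spreads in position and momentum:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```

Nothing in the inequality says that either quantity is random on its own; it says only that the two cannot be simultaneously sharp, which is the author's distinction between "undetermined" and the popular "uncertain."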

Laplace's Demon was killed as collateral damage (obviously his days were numbered anyway). Einstein remained skeptical his entire life: to him the "Unbestimmtheit" (indeterminacy) was on the part of the observer—not realizing certain aspects of nature at this stage in our knowledge—rather than proof that nature itself is fundamentally undetermined and uncertain.

In particular, implications like "Fernwirkung" (action at a distance) appeared to him "spukhaft" (spooky, eerie). But even in the days of quantum computing, qubits, and tunnelling effects, I still would not want to bet against Albert: his intuitive grasp of nature survived so many critics, and waves of counter-proof ended up counter-counter-proved. And while there is plenty of reason to defend Heisenberg's findings, it is sad to see such a profound meme in popular science rest merely on a loose attitude towards translation (and there are many other such cases). I would love to encourage writers in French or Swedish or Arabic to point out the idiosyncrasies and unique value of those languages—not for semantic pedantry, but for the benefit of alternate approaches.

It is like a different tool to apply to thinking—and that's a good thing: a great hammer is a terrible saw. No, I don't literally mean that we should stop believing in, or collecting, Big Data. But we should stop pretending that Big Data is magic. There are few fields that wouldn't benefit from large, carefully collected data sets. But lots of people, even scientists, put more stock in Big Data than they really should.

Sometimes it seems like half the talk about understanding science these days, from physics to neuroscience, is about Big Data and associated tools like "dimensionality reduction," "neural networks," "machine learning algorithms," and "information visualization." Big Data is, without a doubt, the idea of the moment; Forbes had an article about Big Data only a few hours before this was written.

But science still revolves, most fundamentally, around a search for the laws that describe our universe. And the one thing that Big Data isn't particularly good at is, well, identifying laws. Big Data is brilliant at detecting correlation; the more robust your data set, the better chance you have of identifying correlations, even complex ones involving multiple variables. But correlation never was causation, and never will be.

To really understand the relation between smoking and cancer, you need to run experiments, and develop mechanistic understandings of things like carcinogens, oncogenes, and DNA replication. Merely tabulating a massive database of every smoker and nonsmoker in every city in the world, with every detail about when they smoked, where they smoked, how long they lived, and how they died would not, no matter how many terabytes it occupied, be enough to induce all the complex underlying biological machinery.
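A minimal illustration of that gap (invented numbers, not an actual smoking database): a hidden common cause can produce a rock-solid correlation between two variables that have no causal link to each other, and more rows only make the correlation more precise, not more causal.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# A hidden confounder drives both measured variables; neither causes the other.
confounder = rng.normal(size=n)
x = confounder + rng.normal(scale=0.5, size=n)   # e.g., an exposure we tabulate
y = confounder + rng.normal(scale=0.5, size=n)   # e.g., an outcome we tabulate

print(np.corrcoef(x, y)[0, 1])   # ≈ 0.8: strong correlation, zero causation
```

Only an intervention, or a mechanistic model of the kind the essay describes, can separate the two hypotheses.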

If it makes me nervous when people in the business world put too much faith in Big Data, it makes me even more nervous to see scientists do the same. Certain corners of neuroscience have taken on an "if we build it, they will come" attitude, presuming that neuroscience will sort itself out as soon as we have enough data. It won't. If we have good hypotheses, we can test them with Big Data, but Big Data shouldn't be our first port of call; it should be where we go once we know what we are looking for.

Physics has a time-honored tradition of laughing in the face of our most basic intuitions. Einstein's relativity forced us to retire our notions of absolute space and time, while quantum mechanics forced us to retire our notions of pretty much everything else. Still, one stubborn idea has stood steadfast through it all: the universe. Sure, our picture of the universe has evolved over the years—its history dynamic, its origin inflating, its expansion accelerating. It has even been downgraded to just one in a multiverse of infinite universes forever divided by event horizons. But still we've clung to the belief that here, as residents in the Milky Way, we all live in a single spacetime, our shared corner of the cosmos—our universe.

In recent years, however, the concept of a single, shared spacetime has sent physics spiraling into paradox. The first sign that something was amiss came from Stephen Hawking's landmark work in the 1970s showing that black holes radiate and evaporate, disappearing from the universe and purportedly taking some quantum information with them. Quantum mechanics, however, is predicated upon the principle that information can never be lost.

Here was the conundrum. Once information falls into a black hole, it can't climb back out without traveling faster than light and violating relativity. Therefore, the only way to save it is to show that it never fell into the black hole in the first place. From the point of view of an accelerated observer who remains outside the black hole, that's not hard to do.

Thanks to relativistic effects, from his vantage point, the information stretches and slows as it approaches the black hole, then burns to scrambled ash in the heat of the Hawking radiation before it ever crosses the horizon. It's a different story, however, for the inertial, infalling observer, who plunges into the black hole, passing through the horizon without noticing any weird relativistic effects or Hawking radiation, courtesy of Einstein's equivalence principle.


For him, information had better fall into the black hole, or relativity is in trouble. In other words, in order to uphold all the laws of physics, one copy of the bit of information has to remain outside the black hole while its clone falls inside. Oh, and one last thing—quantum mechanics forbids cloning. Leonard Susskind eventually solved the information paradox by insisting that we restrict our description of the world either to the region of spacetime outside the black hole's horizon or to the interior of the black hole. Either one is consistent—it's only when you talk about both that you violate the laws of physics.

This "horizon complementarity," as it became known, tells us that the inside and outside of the black hole are not part and parcel of a single universe. They are two universes, but not in the same breath. Horizon complementarity kept paradox at bay until last year, when the physics community was shaken up by a new conundrum more harrowing still— the so-called firewall paradox.

Here, our two observers find themselves with contradictory quantum descriptions of a single bit of information, but now the contradiction occurs while both observers are still outside the horizon, before the inertial observer falls in. That is, it occurs while they're still supposedly in the same universe. Physicists are beginning to think that the best solution to the firewall paradox may be to adopt "strong complementarity"—that is, to restrict our descriptions not merely to spacetime regions separated by horizons, but to the reference frames of individual observers, wherever they are.

As if each observer has his or her own universe.

Ordinary horizon complementarity had already undermined the possibility of a multiverse. If you violate physics by describing two regions separated by a horizon, imagine what happens when you describe infinite regions separated by infinite horizons! Now, strong complementarity is undermining the possibility of a single, shared universe. At first glance, you'd think it would create its own kind of multiverse, but it doesn't.

Yes, there are multiple observers, and yes, any observer's universe is as good as any other's. But if you want to stay on the right side of the laws of physics, you can only talk about one at a time. Which means, really, that only one exists at a time. It's cosmic solipsism. Sending the universe into early retirement is a pretty radical move, so it had better buy us something significant in the way of scientific advancement.

I think it does. For one, it might shed some light on the disconcerting low quadrupole coincidence—the fact that the cosmic microwave background radiation shows no temperature fluctuations at scales larger than 60 degrees on the sky, capping the size of space at precisely the size of our observable universe — as if reality abruptly stops at the edge of an observer's reference frame. More importantly, it could offer us a better conceptual grasp of quantum mechanics. Quantum mechanics defies understanding because it allows things to hover in superpositions of mutually exclusive states, like when a photon goes through this slit and that slit, or when a cat is simultaneously dead and alive.

It balks at our Boolean logic, it laughs at the law of the excluded middle. Worse, when we actually observe something, the superposition vanishes and a single reality miraculously unfurls. In light of the universe's retirement, this all looks slightly less miraculous. After all, superpositions are really superpositions of reference frames. In any single reference frame, an animal's vitals are well defined.

Cats are only alive and dead when you try to piece together multiple frames under the false assumption that they're all part of the same universe. Finally, the universe's retirement might offer some guidance as physicists push forward with the program of quantum gravity. For instance, if each observer has his or her own universe, then each observer has his or her own Hilbert space, his or her own cosmic horizon and his or her own version of holography, in which case what we need from a theory of quantum gravity is a set of consistency conditions that can relate what different observers can operationally measure.

Adjusting our intuitions and adapting to the strange truths uncovered by physics is never easy. But we may just have to come around to the notion that there's my universe, and there's your universe—but there's no such thing as the universe. Nature and nature's laws lay hid in night; God said "Let Newton be" and all was light. The breathtaking advance of scientific discovery has the unknown on the run. Not so long ago, the Creation was 8, years old and Heaven hovered a few thousand miles above our heads.



Victor D. Chase is the author of Shattered Nerves: How Science Is Solving Modern Medicine's Most Perplexing Problem, a guide to functional electrical stimulation designed for therapists of individuals with a spinal cord injury.