Human Senses Versus Technology | By John Bickart, Ph.D. | Science Education and Spiritual Transformation
A while ago – nearly half a millennium ago, around the 1500s – people often spoke of qualities like color, smell, sound, taste, and feel. These qualities were given a place of high importance. Back then, scientific experimentation was largely concerned with observing the world through the senses. Scientists (called natural philosophers at that time) watched nature’s wildlife, plants, and physical phenomena in their native environments more than they interrogated nature with specific questions in mind. But since the influence of philosopher/scientists like Descartes, Galileo, and Bacon, the human senses have become secondary in importance to qualities that can be measured. Now we talk mostly about measurable quantities.
Then, after the 1500s, the role of experimentation started to change. We became a little less like friends with nature and a little more like owners. Some experiments began to ask nature to perform under circumstances designed by the interrogator to probe specific concepts. Increasingly, the experimenter asked a pointed question, and nature was limited to answering that question. In some ways, nature was treated a little like a pet that is made to sit and beg and do tricks on command. Experimentation became more head-based and left-brain oriented, rather than keeping the ancient, heart-based, right-brain orientation. In other words, scientists looked at nature in a more mechanical, analytical way, taking things apart and assuming that nature is made of machines. Of principal importance in this type of interrogation are measurable quantities, usually assigned a number. Therefore, while the human sense qualities became secondary, qualities like magnitude or size became primary.
Integration is the Key
We are now integrating the ancient abilities to observe in balance with parts analysis. In other words, we are combining heart and head. What is an example of an interrogation that is pointed, analytical, and mechanical? Consider investigating a flower. The analytical scientist of the recent past would remove the flower from its native environment, stop it from growing, cut it apart, then surmise how it operates by examining the parts. This aggressive form of interrogation reduces the flower to an object. Objectifying animals, plants, and people has been quite common in this period from the 1500s until now. I don’t know about you, but I don’t like being asked questions only about my parts. I am a whole person – more than my parts. So, the science course of today needs to stress to future scientists and science teachers that we are not objects.
During this left-brain time, there was a movement toward measurement, which of course favored technology over the human senses. A thermometer measures temperature better than human touch, and a ruler measures length better than the eye. So, the role of experimentation became increasingly about numbers and parts, objects and measurements, technology and machinery. On the one hand, as we shifted toward technology and away from the human senses, many aspects of science improved. But on the other hand, one might ask, “Has anything been lost? Have we thrown out the baby with the bathwater?” This essay seeks to alert us to this shift in the role of experimentation. It recommends that we embrace the incredible new ways to use technology without throwing away some important benefits of using our human senses.
Re-Integrating Our Senses
Our senses may not be accurate measurement devices, but they are the keys to personal growth and perhaps our greatest human power.
If you are using your senses to be an observer, you increase your presence. This form of personal growth always ends well. Are there any events in life that are not better if we give them more attention? And what do I mean by our greatest human power? Let me show you with an example. Have you ever been helped by a friend who just listened? They were there for you, wholly attending. Why does that work? Someday, science will catch up with the mechanisms of this process, but meanwhile, we can recognize the power in such an exchange. Science is only at the tip of an iceberg in finding that observation helps more than just people. It helps flowers, animals, and perhaps even, according to quantum effects, matter itself. We are just beginning to scientifically measure the effect we have on the world when we observe it – and this is a power.
A good science course, especially for future teachers, may want to note this historical shift toward technology – away from the human senses. Then, a good course would ask some questions such as the following.
- Why did this shift take place?
- What has become better since the shift?
- What may have been lessened since the shift?
Thinking Too Much versus the Right Amount
“The simple reason why the majority of scientists are not creative is not because they don’t know how to think;
but because they don’t know how to stop thinking.”
– Eckhart Tolle, The Power of Now (2011)
How does one not think? Try this experiment right now. Pay attention to one of your senses – look, listen, smell, taste, or feel something. Do it for about half a minute. Did you notice that you suspended analytical thought as you observed what your senses reported? Did you think of tomorrow or yesterday – of somewhere other than here? If you did, that was not part of your sensing. Observing/sensing and thinking/analyzing are separate activities. They occur very closely in time, much as fMRI research has shown for electrical and visceral activity across the two hemispheres of the brain. The interplay is vitally important. But the act of purely observing preempts and precedes the act of thinking about what has been observed. And it happily blocks out the preoccupation with worries or anticipations about tomorrow and the laments or sentimentality about yesterday. That is why some people meditate.
Observation is the key to not thinking. And the human senses are the gateway to observation.
So, what have we learned so far? There are two things a student of science should do to move the Role of Experimentation forward while still moving our Friendship with Nature forward. One is to keep a balance between human senses and technology. A second is to use those human senses as a gateway to observation. Let’s look now at a third way – language – to establish a balance between head and heart.
The role of language is a two-edged sword. If kept in balance, it is a powerful tool for society, but the two edges of this sword could end in a duality that takes us apart. We can use words to enlighten ourselves and become friends with nature, or we can get lost in a description as a representation of the genuine, then substitute that description for relationship.
The dawning of language was different from the everyday use of language. At first, we were in relationship with the world. Words gave us a way to name that which we had come to know – our new ‘friends’. But soon after that, we used words as representations of actual phenomena. We moved a primary relationship to a secondary one. The word took the place of the thing itself. Whereas the word “mother” was once an all-encompassing experience of joy, it could be reduced to a cry when we got hurt or hungry.
This is very like the movement of the role of experimentation. We move from a primary relationship with nature through our human senses to a removed, representational language of names and numbers from our measurements. It is like the difference between being live and in person on a date versus corresponding through technology.
The trick is to continue measuring and speaking in our language, without losing actual contact – actual relationship – with the world around us.
Do you remember when you first learned language? Almost no one can. How about humankind’s first language? We can only make conjectures about the dawning of language for humankind. A set of 32 recurring symbols, written roughly 30,000 years ago, has been uncovered on some 370 cave walls across the globe. Henri Bortoft investigates this conundrum by looking through Helen Keller’s eyes as she first experiences language.
“It is language which teaches us concepts as children, and hence it is language which first gives us the ability to see the world, so that the world can appear. But our first experience of language, the dawning of language, is different from our experience of language as adults. A vivid illustration of the original disclosive power of language—as distinct from the secondary representational function of language, as when it is used for conveying information—is given by the remarkable story of Helen Keller. As a very young girl, Helen Keller had a severe attack of measles, which left her deaf and blind. This happened to her before the dawning of language, and it was only due to the extraordinary work of her dedicated governess that these extreme difficulties were eventually overcome. The moment when this finally happened is described in her own words:
‘We walked down the path to the well-house, attracted by the fragrance of the honeysuckle with which it was covered. Someone was drawing water and my teacher placed my hand under the spout. As the cool stream gushed over one hand she spelled into the other the word “water” first slowly, then rapidly. I stood still, my whole attention fixed upon the motion of her fingers. Suddenly I felt a misty consciousness as of something forgotten—a thrill of returning thought; and somehow the mystery of language was revealed to me. I knew then that “w-a-t-e-r” meant the wonderful cool something that was flowing over my hand. That living word awakened my soul, gave it light, joy, set it free!. . . I left the well-house eager to learn.
Everything had a name, and each name gave birth to a new thought. As we returned to the house each object that I touched seemed to quiver with life. That was because I saw everything with the strange new light that had come to me.’ (Helen Keller, The Story of My Life)
She is blind but describes herself as seeing with a new light. The word “water” does not represent or stand for water here; it is not a label to be attached to water for the purpose of communicating information. Helen Keller does not already know water, to which she then adds the word. No, in this case everything is reversed. The word “water” shows her water; it brings it to light so that she sees it.” (Bortoft, 1996)
Language sets humankind apart from the animals. Language is not just the skill of communicating information. It gives us the unique ability to be conscious of being in the world, but not of the world.
“Without language no things could be, and therefore there would be no world. So the dawning of language is the dawn of the world—as we can see so clearly here in the experience of Helen Keller. This sets her soul free because to be human is to live in the world. Only human beings have a “world”—which is entirely different from inhabiting an environment in the way that animals do. Until this experience of the dawning of language, Helen Keller had been unable to be in the world, which is proper to human beings, and had inhabited a wordless environment. A human being not able to be human—and now she is freed from the darkness of this condition to enter the light of the human world.” (Bortoft, 1996)
There are great advantages to the act of experimentation and discovery. The language of science can be like a light coming into a dark room, shining on all manner of treasures. The trick is to stay in that room and appreciate the treasures – not just visit it quickly, then use language to talk endlessly about it. Our goal for the future of science is to get back to this room of ours. This incredible world is a room full of riches. It is a garden of immeasurable variety. We must get back to the garden. We must learn to investigate her while simultaneously regaining friendship with her.
#92 The First Great Play
At first, there was a great play named “The Lila”. It was created by Gods and acted out the story of how the Earth grew for as long as their stories remembered. You might say that if you saw The Lila that you were privy to the story of stories. It portrayed how the rocks and plants and animals and of course, the peoples of the Earth came to be.
Finally, the great play was ready to be presented. It was decided it would be acted out in two cities, once on the near side of the mountain, and once on the far side. Audiences were invited with the proviso that they give The Lila their utmost respect and that they attend with particularly keen observation. You see, the actors in the great play were the actual elemental beings of earth, water, air and fire. So, if the audiences did not receive them well, the actors might retreat. Then, they would not be able to perform their role in running nature, herself.
In the near city, members of the audience were scientifically curious to learn how to control the elements. They asked many questions of the actors to try to understand just how the rocks and plants worked, so that they could have the power to master them. In their zeal, they tested the actors rigorously, taking every bit of the elements’ stories apart. They thought that their curiosity would be a compliment to the great play, but the actors left the near city feeling tortured and tired.
In the far city, the audience fell silent in awe of the beauty and majesty of The Lila. They marveled at the relationships between rocks and plants and animals. They let the natural elements speak for themselves. They felt profound gratitude for the role that people were given in the great play. So deep was their observation of the play, that they found themselves watching parts of it over and over in their minds.
The following years found the near city bringing storms and blight upon itself, while the far city dwelled in peace.
By DEB SCHEIN | Growing Wonder
On March 28, 2023, I happened to read an article in The New York Times titled “What Happens When an Artist Loses His Sight.” It was written by Roger Rosenblatt, a writer and contributor to Time magazine and PBS “NewsHour,” and the author of a new book titled “Cataract Blues.” What caught my attention was Rosenblatt’s attention to the unseen forces in the universe and his implying, but never using, the word spirituality.
He writes that when he began to lose his eyesight, he started to reflect upon all “the invisible forces that govern our lives (such as) gravity, electric currents, magnetic fields and also love, grief, morality, faith and creativity.” Rosenblatt goes on to include, “The presence and power of invisible things and of a secret music — of the spheres and of ourselves.” Rosenblatt’s reflections reminded me of Lisa Miller’s book, “The Awakened Brain” (2021). In it, she shares that when we are healthy, when we flow with the universe, we also vibrate at the same frequency as the universe.
I thank Rosenblatt for highlighting these hidden, invisible qualities of life and earth. Yet it saddens me that he did not use the word spirituality. Unfortunately, it appears that we are still unable even to talk about spirituality, or to use the word where it so clearly belongs. Indirectly, Rosenblatt also captures how we are both naturally and innately spiritual, yet how we also require some external, connecting forces to continue our spiritual journeys. He does this all without even mentioning the words that tie his thoughts to that missing, invisible word – spirituality.
I would therefore like to share some thoughts from my own work in spirituality. I began by looking at the spiritual development of young children. My lens has grown and changed, and I am currently writing about spiritual flourishing. Unlike development, flourishing requires connection, belonging, and relationships. Yes, spiritual flourishing is invisible, and yet it can be a very powerful force in helping us humans to be more optimistic and better able to make good choices for ourselves and the world. Research is showing that spiritual flourishing can lead to the resiliency and empathy so needed today (Miller, 2021). The word spirituality, for me, reflects our human ability to wonder and to seek out moments of awe and joy. It appears that we need relationships, community, love, and nature in order to achieve this invisible accomplishment of spiritual flourishing. I believe it should be integrated into our school curriculum, our child-rearing practices, and our ways of thinking of ourselves as spiritual humans. It should no longer be a hidden, unspeakable, invisible force but a desired concept that reflects who we are from a universal perspective.
Shame and shunning make mental illness worse. But new studies suggest that attitudes are changing for the better—and that’s largely due to young people.
Today, people in the United States know far more about mental illness than did previous generations. They might know what it looks like: changes in emotions, thinking, or behavior that make functioning in daily life difficult, if not impossible. They’re much more likely to understand that most of us will experience some form of mental illness in our lifetimes, like depression or anxiety. And they know that smaller numbers of people will experience more severe conditions like bipolar disorder, schizophrenia, or PTSD.
Despite this progress, for decades attitudes toward people with mental disorders have hardly budged. How do we know this? One of the crucial ways we measure prejudice is to ask about “social distance.” In this case, that involves asking: How close would you be willing to live to someone with a mental illness? Would you live in the same state? Be in the same classroom or workplace? Participate together on a project? Ride next to them on public transportation? Go out with them? Let your offspring marry them?
When friends, family, and society shame people for their illness, and shun them, that’s stigma. This shaming can take many forms, from stereotypes (“they’re dangerous”) to moral judgments (“you’re just a coward”) to dismissive labeling (“you’re crazy”). There can be real consequences of stigma, such as lost job opportunities and social marginalization, as well as giving up on seeking treatment. Overt discrimination is a big part of stigma, too: People with mental disorders, in many states, cannot run for office, serve on a jury, keep a driver’s license, or retain child custody. Most perniciously, the stigma of mental illness can lead people to hide their troubles and refuse to get help—which is likely to worsen their condition and create a vicious cycle.
Until very recently, studies consistently showed that the desire for social distance from people with mental illness had not improved over the past 50 to 60 years. In fact, in some ways it had actually worsened, as more people than before automatically linked mental illness with aggression and violence.
At the same time, studies also showed that people had greater knowledge of ADHD, depression, bipolar disorder, PTSD, and more—but just “knowing” more facts about mental illness can actually make things worse. For example, if you learn that people with schizophrenia may hear voices and become paranoid, you might consider that to be quite frightening, even threatening. Similarly, understanding that people with severe depression may come to feel that their lives are not worth living—and may therefore consider suicide—can trigger the belief that such individuals are utterly self-centered. What might not be understood is that severe depression can foster the belief, in people affected, that everyone else would be better off without them.
In other words, factual knowledge about mental disorders, alone, can actually fuel stereotypes. In addressing stigma, the missing piece isn’t knowledge—it’s contact, empathy, and humanization.
A recent study published in December by the JAMA Network Open suggests that things may finally be starting to change. But the picture is complicated: Some kinds of illness are becoming less stigmatized, true, but people still want to keep distance from other forms. The good news is that young people are much less likely to stigmatize mental illness than older generations—and that there are specific steps we can take, as individuals and society, to keep making progress.
Generational shifts driving acceptance
Surveying a representative group of U.S. adults over a period of more than two decades, sociologist Bernice A. Pescosolido and her colleagues found a significant decrease in the desire for social distance related to depression over the past few years.
That is unprecedented, and of real importance. However, in the same paper, the researchers found that attitudes related to conditions like schizophrenia and substance-use disorders did not show signs of improvement—and had actually worsened.
Even though the participants in this study were many—over 4,000 adults—it would take even larger groups to understand how socioeconomic, ethnic, or racial characteristics affected changing attitudes toward mental illness. Still, from this study and a number of others, it does appear that improvements are driven mainly by younger people.
In fact, research hints at a massive generational shift in how mental illness is perceived and socially experienced. Multiple other surveys and studies besides the one by Pescosolido and her colleagues suggest that both millennials (those born from the early ’80s to the mid-’90s) and Generation Z (who were mostly born in the 21st century) are much more accepting and knowledgeable about mental illness than previous generations.
Why? Rates of diagnosed mental illness have been rising among young people. For example, one 2019 study found that almost half experienced depression, peaking at 60% among teens aged 14–17, considerably more than previous generations. More recent work conducted during the COVID-19 pandemic hints at a profound mental health crisis.
When the CDC surveyed almost 8,000 high school students in the first six months of 2021, researchers found that depression, anxiety, and other disorders permeated the lives of adolescents during the pandemic. All groups reported more persistent sadness since spring 2020, though the rate rose faster among white teens than others. Nearly half of lesbian, gay, bisexual, and transgender teens reported seriously thinking about suicide, compared with 14% of heterosexual peers. One in four girls did so, twice the rate of boys.
Did that translate into higher suicide rates? Yes, and decidedly so, especially for girls. Some emergency departments have reported a significant increase in teens coming in for suicide attempts. (Note that these numbers are only provisional and could go up with time.)
What’s responsible for these negative trends? That’s a topic hotly debated by scholars, with most suggesting some combination of factors like the pandemic, climate change, political and economic instability, increased educational competition, and technological changes like phones and social media. Even more, for teenage girls in particular, a toxic “triple bind” of impossible expectations (be supportive and nurturing, be super competitive, and do both of the above effortlessly while looking “hot”) plays a key role.
However, as depression and anxiety spread among young people, it does seem as though these conditions are becoming normalized—and that youth are becoming more open and compassionate with one another. And high school clubs, as well as college programs, that focus on reducing stigma with respect to mental disorders have been shown to create real benefits.
All evidence to date suggests that many kinds of mental illness carry less stigma for younger generations. As these young people attain full maturity, the tide could eventually turn even for disorders like schizophrenia—the way it has, convincingly, for issues like same-sex marriage over the past 20 years. There are steps we can take to keep pushing this process forward.
What can create more positive change?
First, from a “top-down” perspective, enforcement of anti-discrimination policies, including the Americans with Disabilities Act, can help to drive acceptance. Title I of the ADA blocks employers from discriminating against people with disabilities, including mental illness, and requires them to make reasonable accommodations. Last week, a man in Kentucky won a half-million-dollar judgment against the employer who fired him for having a panic attack at work, which will surely discourage other companies from doing the same.
Beyond employment protection, we need enforcement of laws mandating “parity” for coverage of mental and physical disorders, and there’s much work to do with police and the courts to make a distinction between criminal activity and mental health crises.
Such steps can limit the consequences of stigma, but they can’t erase its existence. Though we’ve learned that information all by itself doesn’t reduce stigma, that doesn’t mean we should stop educating people from early ages about diagnosis and treatment—and there is evidence to suggest public health campaigns can reduce stigma if properly funded and executed.
For example, surveys conducted two years after Scotland’s multiyear, multiplatform “See Me” campaign—which aimed to normalize mental illness—showed a 17% drop in fear of people with serious mental illness, among other good outcomes. A much briefer social media campaign in Canada called “In One Voice” resulted in a “small but significant” decrease in a desire for social distance one year after it ended—though the same study also found that people didn’t feel more motivated to actually help someone in a mental health crisis.
The contrasting results of these two campaigns suggest that size and scope matter when it comes to changing attitudes. Scotland’s much more comprehensive approach made more of an impact than “In One Voice.” And it emphasized personal contact, not just factual knowledge, asking us to “see” real people in all their complexity.
The California Mental Health Services Act is a statewide prevention and early intervention program directly addressing stigma and discrimination, including “a major social marketing campaign; creation of websites, toolkits, and other informational resources; an effort to improve media portrayals of mental illness; and thousands of in-person educational trainings and presentations occurring in all regions of the state.” An independent evaluation found that it succeeded in reducing stigma in California, “with more people reporting a willingness to socialize with, live next door to, and work with people experiencing mental illness.” Participants also reported “providing greater social support to those with mental illness.”
Policies and education do work to reduce stigma, but they alone cannot change human hearts.
It has probably helped a lot for more and more people to talk about their experiences with mental illness, on social media and through popular media like magazines and television. In 2013, the New York City chapter of the National Alliance on Mental Illness teamed up with marketing company JWT New York to launch the “I Will Listen” campaign. They asked people to publicly pledge on social media to hear and support individuals struggling with mental illness.
That early effort encouraged others to later speak out about their experience with depression and addiction on platforms like TikTok and Facebook, making private struggles public in a way that previous generations only glimpsed with books like William Styron’s groundbreaking 1990 memoir Darkness Visible. Or, more recently, books like Kay Redfield Jamison’s memoir An Unquiet Mind (1996), Andrew Solomon’s The Noonday Demon (2001), and Brian Broome’s Punch Me Up to the Gods (2021).
It’s important to note that there is little solid evidence to date that talking about mental illness on social media reduces stigma—and, in fact, at least one study found that social media (if it promotes stereotypes) can actually increase stigma. That doesn’t mean people shouldn’t try. It could simply mean that it isn’t enough for people to talk about their own experiences with mental illness; we might also need concerted efforts to limit hate speech and misinformation on social media about people with mental illness. And that personal disclosures of mental disorder need to be grounded in rehearsal, support, and timing, as is the case with stigma expert Pat Corrigan’s program, Honest, Open, and Proud.
Beyond social media, news and entertainment media have a long way to go in their representations of mental illness. Many studies through the years have shown that stigmatizing portrayals produce more social stigma and can make things much worse for people with mental illnesses. Although more accurate and humanized accounts do appear, the predominant themes are ones of incompetence and violence. We simply need better, more accurate, and more humanized media portrayals—and perhaps that needs to start with giving journalists and other content creators specialized education in college, graduate school, and professional development courses.
As well, better access to evidence-based treatments is a huge priority for the entire mental health profession. We now understand that many forms of psychotherapy and family-based treatment, as well as medications when needed, can combat some of the most serious symptoms and impairments related to mental disorders. But distressingly low proportions of those in need of such care actually receive evidence-based treatments. For many, even just regular therapy is financially out of reach. At an overall per-capita level, funding for mental health research, via the National Institute of Mental Health, remains far lower than for conditions like cancer.
That is quite ironic. Several generations ago, cancer was highly stigmatized, viewed as a disease triggered by one’s loss of the will to live. Indeed, if your relative died from cancer, the obituary might instead say that she passed away from an unknown illness. Today, though—given the huge spike in disclosure and acceptance—cancer has become a true cause, engendering support and large economic outlays in the battle against it. Understanding that treatment can be effective might help reduce the stigma of mental illness, if we can grow to see it as just another human problem that medicine can address, given the time and tools.
Finally, as noted above, young people appear, in many surveys, to be the drivers of changed attitudes and behaviors. A devastating kind of stigma is self-stigma—and the evidence indicates that millennials and Gen Z are turning away from seeing themselves as broken for feeling depressed and anxious, toward seeing themselves as having common illnesses that can be managed and even overcome with treatment, group support, and solidarity.
Young people are the key. Not just because they are always the ones who will shape the future, but because today’s youth are facing formidable mental health challenges. If we can support their mental health through these waves of stressful social change, they might have the compassion and the wisdom to alleviate the suffering of those with mental illness, instead of making it worse with stigma.