Esther Dyson’s Interactive Keynote “Humans Controlling AI” for 361Firm’s New York Conference on March 20, 2024

Tech industry investor and thought leader Esther Dyson returns to kick off the March conference with an interactive discussion on Artificial Intelligence.

While discussing her topic and format, Esther sent the emails below with her recent articles, including one published in The Information (immediately below). As you will see, Esther believes that society and investors are missing key points. She wants to put the emphasis back on the need to train and support humans.

We will kick off the discussion with Esther and open it up for Q&A.

“Humans Controlling AI”

---------- Forwarded message ---------
From: Esther Dyson
Subject: As promised - originally published in TheInformation.com
To: Mark Sanor <msanor@361firm.com>

Don’t Fuss About Training Our AIs. Train Our Babies!

People worried about AI taking their jobs are competing with a myth. Instead, people should train themselves to be better humans.

By Esther Dyson

Jan 24, 2024, 9:08am PST

Humanity is waking up to the challenges and opportunities of artificial intelligence, but we don’t yet understand our role. People talk about unexplainable AI when they should be more concerned about the unexplainable humans running the companies that develop the AI. (Hiya, Sam!) People worried about AI taking their jobs and taking control are competing with a myth. Instead, people should train themselves to be better humans even as they develop better AI. People are still in control, but they need to use that control wisely, ethically and carefully.

The first step is to understand the fundamental difference between humans and AIs. We are analog, chemical beings, with emotions and feelings. Compared with machines, we think slowly—and we act too fast, failing to consider the long-term consequences of our behavior (which AI can help predict). So we should not compete with AI; we should use it. At the same time, we should become better humans: more self-aware and more understanding of the world around us, better able to understand our own and others’ motivations. We should know enough to manipulate ourselves and to resist manipulation by others.

The Takeaway

We should automate routine tasks and use the money and time saved to allow humans to do more meaningful work, especially helping parents raise healthier, more engaged children.

This is the solution to the problem of AI “stealing” jobs or evading our control: If we manage AI correctly, we can automate routine tasks and use the money and time saved to allow humans to do more meaningful work. That work starts with training other humans: kids learning from well-paid, engaged caregivers; patients talking with real doctors and nurses, not just bots and machines; students learning not just to remember facts but to ask provocative questions; teenagers interacting with human mentors instead of influencers “trained” by algorithms.

The important training is not STEM, coding or how AI works. (The market will take care of that.) It’s about how people work—and how businesses so often make money by manipulating people to buy things they might not need. Instead, people can learn how to manipulate themselves.

Rather than compete with computers, people need to learn how to use and manipulate computer- and AI-driven systems.

Training is recursive. People train people who train people who train still others. Think of three types of roles:

  • Meta trainers: They learn—and teach—a formal curriculum and are usually paid to do so. They train police, teachers, managers, coaches, healthcare workers (especially mental health), doulas, religious leaders and counselors. This is where academies and institutions can create scale. 

  • Front-line trainers: They are mostly paid to do something other than training, but they also de facto train people how to behave, think and be human. They are police, teachers, coaches, healthcare workers, religious leaders, counselors and, most important, parents. These front-line humans encourage individuals to understand themselves and others, and they pay personal attention to those they train. 

  • Trainees: They are children, lovers, friends, neighbors and customers, interacting in clubs and mingling at parties. This is where people learn and also informally teach others—friends, family and colleagues who support you and who come to you for support. 

In the long run, everyone should become a front-line trainer of the next generation within their own families. Many should also be employed as trainers and meta trainers, passing on formal knowledge and insights to the next generation of trainers. (AI can handle the routine and impersonal stuff.)

The Front-Line Trainers

Front-line trainers are crucial to raising healthy, resilient, curious children who will grow into adults capable of loving others and overcoming challenges. There’s no formal curriculum for front-line trainers. Rather, it’s about training kids—and the parents who raise them—to do two fundamental things.

First, ensure that they develop the emotional security to think long term rather than grasp at short-term solutions through drugs, food, social media, gambling or other harmful palliatives. (Perhaps the best working definition of addiction is “doing something now for short-term relief that you know you will regret later.”)

Second, kids need to understand themselves and understand the motivations of the people, institutions and social media they interact with. That’s how to combat fake news—or the distrust of real news. It is less about traditional media literacy and more about understanding: “Why am I seeing this news? Are they trying to get me angry—or just using me to sell ads?”

Unfortunately, many children today are exposed to bad training as a result of having divorced or missing parents or of experiencing abuse, hunger, exposure to addiction, mental illness, racism or bullying. These children complete less school, commit more crime and suffer from more instances of addiction, obesity and poor health than their peers with loving relatives and helpful neighbors. Affected children then often pass these vulnerabilities to those around them, including their own children when they become adults. Everyone suffers (including future taxpayers).

Expecting and new parents are the ideal place to begin such training. They are generally eager for help and guidance, which used to come from their own parents and relatives, from schools and from religious leaders. Now such guidance is scarce.

Proving the Impact

One good example of such training, with data to support its long-term impact, is Nurse-Family Partnership, a community health program in more than 40 states for first-time moms and their children affected by social and economic inequality and other risk factors. Each mother is paired with a registered nurse early in pregnancy and receives ongoing nurse visits through the child’s second birthday.

Mothers enrolled in Nurse-Family Partnership get care and support for a healthy pregnancy. At the same time, families develop a close relationship with the nurse, whom they can rely on for advice on everything from caring for their child to how to provide a stable, secure future for their new family. NFP serves nearly 55,000 families per year in the U.S.

NFP generates significant long-term benefits. Ted Miller of the Pacific Institute for Research and Evaluation, a nonprofit think tank, found that every dollar spent in NFP yields $6.50 in benefits over 18 years through lower healthcare costs, improved health, educational attainment and job performance, as well as reduced crime and welfare payments. The benefits grow over time as the healthier babies become better parents of the next generation.

In another promising development, more states are covering doula services under Medicaid waivers. Doulas—trained and certified social practitioners who assist with pregnancy and newborns—are more cost-effective than nurses and focus more on social and emotional support. Walmart, the country’s largest private employer, recently expanded its employee healthcare coverage to include doulas.

How to Pay for It

In a perfect world, we would increase taxes on our newly efficient corporations—with their smaller but more productive AI-fueled workforces—and use the money to subsidize the pay of trainers and support people caring for others. In the long run, this would create a healthier society, lower the costs of healthcare and reduce many social ills.

But that’s not likely right now. Here are some more realistic steps forward.

Lean on the private sector’s training efforts, which focus on training the meta trainers rather than the front-line ones. Many employers use specialized training companies to select and train new employees. Generally, the training companies are paid only when the trainees are hired, creating an incentive to find promising candidates and to train them well. CareAcademy, for example, trains professional home health caregivers in both medical and people skills.

Use AI to demonstrate the value of training (a good way to use AI!). As employers and health insurance plans seek solutions to problems such as burnout and loneliness, startups are offering various kinds of support and counseling services; both customers and vendors are getting serious about measuring their impact. AI can help.

Stop talking about spending on healthcare and other social support. We need to think about investing in people the same way we think about investing in other public infrastructure. Our human capital is an asset, not a drain on the public purse. Businesses routinely take on debt to improve their standing; so should government (and not just for bridges and special industrial programs).

This may be too optimistic. But it is how we can make the best use of the power and efficiency AI represents and protect ourselves against its misuse. Don’t leave it to the corporations and politicians with agendas they don’t want to explain. And remember, it may be more thrilling to drive a Tesla (or to read about Elon Musk’s drug-addled hallucinations), but nothing beats walking your grandkid home from school. That’s your future, not a shiny mirage.


Esther Dyson is a longtime tech industry analyst and author of “Release 2.0: A Design for Living in the Digital Age.”


---------- Forwarded message ---------
From: Esther Dyson
Subject: #2. A broader take on AI
To: Mark Sanor <msanor@361firm.com>

Some potential prompts and another article

What’s the difference between an LLM and an actual world model that “understands” things? Blah blah.

But for starters, two wonderful quotes:  

“Words are not in themselves carriers of meaning, but merely pointers to shared understanding.” - David Waltz, RIP, in “Daedalus,” 1988. (PS: one challenge with AI is all the stuff it does not index, including Daedalus.)

“The future of search is verbs.” - Bill Gates, at a private dinner somewhere around 2010

How can we deal with the problems of bias?  Complicated! 

What are the implications for intellectual property?  Huge!

What are interesting investment opportunities?  Find good management, because the rest is often replaceable by AI :) 

Focus on the long-term welfare of people and society: Ask not what AI can do but what WE can ask it to do.

“The question of the future of humans and AI seems impossible to answer because of unexplainable humans, not because of unexplainable AI. So much depends on our use and control of AI. And that in turn depends on who ‘our/we’ is. There are a number of issues here. Machines gave us huge gains in our ability to produce and eventually to transport things, including food. That in turn gave us too many choices, which often overwhelm us (see Barry Schwartz’s brilliant book “The Paradox of Choice”). While poor people often lack the money/security to make good choices, rich people lack the time to enjoy/make use of all their options (as described in Eldar Shafir and Sendhil Mullainathan’s equally brilliant book “Scarcity”). We have now gotten used to accelerated but overfilled time. Both then and now, you could lose your life in a few seconds, but in the past there were very few instant solutions for any problem.

“We now live in a world of pills and instant shopping and even instant companions – found on dating apps (some real, some duplicitous) and also on many mental-health support apps. We expect immediate relief of our cravings. But our cravings never go away; instead, they turn into addictions.

“Indeed, what makes us most human may be how we perceive our own time and that of others. That was the fundamental gulf between the protagonist of the movie ‘Her’ (played by Joaquin Phoenix) and his AI ‘lover’ Samantha (Scarlett Johansson); she had more than a thousand lovers and time to pay attention to each of them. But in the end, what we’re seeking is share of mind from other humans, not fungible minutes of attention.

“Instead of regulating AI, we need to regulate its impact, and AI can actually be very helpful at that – both at predicting outcomes and at assessing counterfactuals. That’s what it does much of the time, whether in health care, advertising or political campaigns. It can also automate huge amounts of physical labor and routine decision-making or repetitive work. However, it’s up to humans to figure out what the goals of those AI tools and algorithms should be: How much to maximize sales versus reduce/simplify working hours? How much to maximize profits for the next year, versus for the current CEO’s tenure, versus on behalf of the investors who trade on the basis of a quarter’s earnings? Things were very different when entrepreneurs built businesses for their grandchildren to inherit. 

“Or is ‘we’ actually really people like Vladimir Putin and Donald Trump and Elon Musk – caught up in their own visions of a grandiose future (whether based on an imperial past or a future interstellar civilization)? They measure success differently, and they try to spread that vision whatever way they can. Mostly, they first seduce people with visions of power and money – and then make them complicit through the compromises required to realize those visions. Some make those compromises knowingly, but most are swept along, unexplainable even to themselves.

“AI will inevitably do a lot of useful things. I’d rather have an AI than a hungry, grumpy judge sit on my case in court. And, as a nondriver with no illusions about how safely I (and presumably most sensible people like me) drive, I’d rather sit in a car driven by a predictable AI that does not chat with the passengers, try to drink coffee, look at TikTok during stoplights or speed through yellow lights. Those points make sense and are only slightly controversial.

“To take a less abstract look, let’s use healthcare as an illuminating example. Healthcare is a model for pretty much everything, but with extremes. It’s a business, even though for some people – especially at the beginning of their careers – it’s also a calling. Indeed, it’s a very messy, complicated business. Its people – leaders and workers and customers – are overwhelmed with paperwork, with details, with conflicting regulations and requirements and stiff record-keeping protocols. And, of course, they must deal with privacy requirements that complicate the record-keeping and also serve to maintain silos for the incumbents. AI can help handle much of that. AI will take care of the paperwork, and it can make a lot of good, routine decisions – clearly and cleanly and with explanations. It’s very good at routine operations and at making decisions on the basis of statistics and evidence – as long as it’s prompted with the right goals and using the right data.


“Getting the right goals and using the right data are, of course, the big challenges. Is society really ready to consider the future consequences of its actions, not just a year from now, and not just a century from now, but in the foreseeable future? Think of the people today whose predictable diabetes we do not prevent this year and next; those people will eventually require expensive treatment and find their lives disrupted well before 2040. (See the recent frightening stats on diabetic amputations.) What about the kids who now spend their days in some sort of child storage because parents can’t afford or find childcare? They are likely to drop out of school, get into drugs and lose their way, and scramble as adults to make money however they can in 2040 and beyond. 

“Then there are the mothers today who get inadequate pre- and post-natal care and counseling. They may suffer a miscarriage or fail to provide a nurturing childhood, with all the inevitable consequences by 2040.

“We need AI to predict the positive counterfactuals of changing our approach to fostering and investing in health in advance, versus spending too late on remedial care. If we use the right data and make the right decisions, for each patient specifically, AI will allow us to do one broad, important thing right: It will reduce busywork and free those who joined healthcare as a calling to be better humans – paying human attention to each of the individuals they serve. Our challenge – in healthcare as elsewhere – is to train humans to be human. Training AIs is scalable: Train one and you can replicate it easily. But humans must be trained one by one. Yes, they learn well in groups, but only if they are recognized as individuals by other individuals. 

“There are mostly positive and mostly negative scenarios for the near future. Both will happen across different societies and, of course, they will interact and intersect. There will be stark differences across countries and across boundaries of class and culture within countries. I doubt that one side or the other will win out entirely, but we can collaborate to help spread the good scenarios as widely as possible. We’ll still be asking the same question in 2040: ‘How will it turn out?’ It won’t be over.

“As a society, we need to use the time we spend on rote decision-making and rule-following – which AIs can do well – to free ourselves and train ourselves to be better humans. We need to ask questions and understand the answers. We need to be aware of others’ motivations – especially those of the AI-powered, business-model-driven businesses (and their employees) that we interact with every day. 

“In the positive parts of the planet, AI – in its ethical form – will win out and we’ll start focusing not so much on what AI can do, but on what we ask it to do. Do predatory business models reign supreme, or do we focus more on the long-term welfare of our people and our society? In short, we need explainability of the goals and the outcomes more than we need an understanding of the technological underpinnings.

“And we need to understand our own motivations and vulnerabilities. We need to understand the long-term consequences of everyone’s behavior. We need the sense of agency and security that you get not from doing everything right, but from learning by making, acknowledging and fixing mistakes. We need to undergo stress and get stronger through recovery. What makes us special in some ways is our imperfections: the mistakes we make, the things we strive for and the things we learn.”

More re David Waltz:

The winter 1988 issue of “Daedalus” on Artificial Intelligence. My favorite article in here is Danny Hillis’, on the evolution of language. My favorite quote is from David Waltz: “...words are not...carriers of complete meanings, but are instead more like index terms or cues that a speaker uses to induce a listener to extract shared memories and knowledge. The degree of detail and number of units needed to express the speaker’s knowledge and intent and the hearer’s understanding are vastly greater than the number of words used to communicate.” This explains precisely the difficulty of knowing another culture; it’s not enough to know the rules and definitions, but you must also know the context that everyone around you shares. Language has syntax and semantics, but for true understanding you also need situation. (See page 31.)


Esther Dyson

Founder, Wellville, www.wellville.net

@edyson, Always make new mistakes!