Patients must be put at the center not only of care but also of health technology. In a broader sense, that is what digital humanism is about.
The Medical Futurist, 13 July 2019

Instead of letting technological development serve the interests of big tech companies to the detriment of people – exploiting human weaknesses and taking control out of their hands – humans should step up and say stop to technology that degrades humans, creates or widens gaps in societies, and disregards diversity. Here are some ideas and principles about how that, namely digital humanism, could unfold in healthcare.
|Technology vs. humans: who has control?|
Google, Facebook, Twitter, Amazon, YouTube, and co.: for years we have been hearing about how they are reshaping the world, yet the harmful effects of the tech companies of the ‘attention economy’ are still not taken seriously enough – although there are plenty of signs of how damaging they can be for societies.
The system is failing – that’s what Tim Berners-Lee, the founder of the World Wide Web, said two years ago. He emphasized that
“…while digitalization opens unprecedented opportunities, it also raises serious concerns: the monopolization of the Web, the rise of extremist opinions and behavior orchestrated by social media, the formation of filter bubbles and echo chambers as islands of disjoint truths, the loss of privacy, and the spread of digital surveillance.”
Remember the news about Facebook becoming a central tool for spreading propaganda against the Rohingya in Myanmar – even the UN blamed the platform for it – or about its role in boosting support for Rodrigo Duterte in the Philippines? Or when Russian actors flooded social media platforms with fake news during the 2016 US elections to sway the outcome?
However, we don’t have to look that far; the individual level tells the same story.
Online companies are building technology that’s addictive and attempts to catch users’ attention at any price – even by exploiting human weaknesses and harnessing our worst psychological flaws.
According to statistics, the average person looks at their phone 150 times a day. Every time we get a notification, the brain releases a flood of dopamine, as addictive as the reward a gambler gets from pulling the handle of a slot machine. Likes? Shares? A new e-mail? It doesn’t matter, as long as it keeps you engaged. Inclined to read news tailored to your interests for longer? They build your very own filter bubble so you never have to confront conflicting opinions again. More excited when watching conspiracy theories? They put the wildest ideas out there in front of you.
|Humane: A New Agenda for Tech from Center for Humane Technology on Vimeo.|
In his excellent lecture above, Tristan Harris, former design ethicist at Google and co-founder of the Center for Humane Technology, mentioned that of the more than 1 billion hours of YouTube watched daily, 70 percent come from recommendations. It’s not users actively seeking out videos about, say, flat-earth conspiracies; YouTube puts them in front of viewers because watching them results in longer engagement on the site. And users diligently click on the next video, join the recommended Facebook group, or allow notifications from dozens of sites. So who has control over our attention?
|Artificial intelligence, smart algorithms, and power|
You may say that this is just one tiny fraction of technological development – we have advanced robotics, virtual reality, augmented reality, and artificial intelligence, and the way social media is shaped doesn’t have such an influence. However, if you look at smart algorithms as the control mechanisms in robots, or at VR/AR as immersive, interactive environments for intelligent software, you’ll find that the ultimate technology we have to worry about is smart algorithms and artificial intelligence: algorithms harnessing the power of big data and the achievements of computer vision and natural language processing – with the capability to take over human agency.
No wonder that concerns about A.I. development revolve not only around smart algorithm-based automation replacing jobs but also around how it may push the human race out of the cockpit.
Stephen Hawking even said the development of full artificial intelligence could spell the end of the human race, and Elon Musk agreed. Tristan Harris argues that social systems coupled with overwhelming A.I. and the extractive incentives of the biggest technology companies are exploiting human weaknesses and downgrading humans: while we upgrade our technologies and machines, our minds are being hijacked, creating zombies addicted to screens.
That sounds horrible and we should not let it happen – but what can we do? What solutions are at our disposal to counter the interests of technology companies and the inherent biases of their products?
|The solution is about digital humanism|
Michael Stampfer, Managing Director of the Vienna Science and Technology Fund (WWTF), said at a conference on digitalization in Vienna that
“…the problem is not that the biggest tech companies misused neutral technologies, but that they build certain principles, values, and interests into their products already in the design phase.”
Who says that Facebook has to have a “like” button that turns the platform into a slot machine? Or that notifications should pop up whenever you haven’t used Instagram for a while, reminding you of friends sharing their memories and turning the platform into a social competition over whose life is cooler? Stampfer believes that
…the hidden interests coded into algorithms should be brought into the spotlight, and the values of humanism should be channeled into the core of technological development.
That requires self-restraint on the part of the biggest tech companies, which might be a tall order for enterprises competing for higher profits in a brutal market. But at least rhetorically, Mark Zuckerberg, for example, embraced Tristan Harris’ idea of “time well spent” as a design goal for Facebook. Harris says the solution lies in creating “humane technology”: interfaces that are responsive to human needs and considerate of human frailties. He believes we should rebuild our social systems and artificial intelligence algorithms to reflect this humane attitude.
|A systemic problem that needs systemic responses – also in healthcare|
That’s also what the Vienna Manifesto on Digital Humanism outlines, putting a specific emphasis on what should be the basis of future technological development. We went through its principles and contemplated what they mean for healthcare and how they could be adapted to medicine.
|1. Digital technologies should be designed to promote democracy and inclusion|
In healthcare, that should mean technologies accessible to anyone and solutions that overcome social, financial, and educational barriers.
|2. Privacy and freedom of speech are essential values for democracy and should be at the center of our activities|
Safeguarding sensitive patient information should be a top priority – even more so than for “regular” data. With the appearance of huge amounts of genetic and genomic data, unauthorized access to such health information would not only jeopardize patients’ current state but also meddle with their future.
|3. Effective regulations, rules, and laws, based on a broad public discourse, must be established|
We have been pushing for policy-makers, regulators, and public institutions to step up their game for years. However, what the two-day congressional hearing of Mark Zuckerberg showed last year was that lawmakers’ understanding of technology is quite superficial – even though understanding is the first step toward effective regulation. And if they cannot make sense of Facebook, what will they do with bioterrorism, artificial intelligence, exoskeletons, or virtual reality treatments?
|4. Regulators need to intervene with tech monopolies|
Beyond the above requirement for lawmakers, The Medical Futurist has already recommended setting up a global FDA-like entity. This would enable effective regulation of the many disruptors who bypass the approval of regulatory agencies and simply target the right population through online channels, no matter which country they are in.
|5. Decisions with consequences that have the potential to affect individual or collective human rights must continue to be made by humans|
We have been advocating for years that automated decision-making systems in healthcare should only support human decision-making, not replace it. The transparency and accountability of smart algorithms should be among the first concerns of health tech developers.
|6. Scientific approaches should cross different disciplines & academic and industrial researchers must engage openly with wider society and reflect upon their approaches|
We advocate for collaboration across disciplines, as well as across different groups in society, because it increases the effectiveness of health technology products. The challenges of diversity are all too prevalent in healthcare, too – just look at the femtech market – and we should do everything we can to close the gaps.
|7. Practitioners everywhere ought to acknowledge their shared responsibility for the impact of information technologies|
The Medical Futurist team has been contemplating the impact of futuristic technologies on society and the medical community for years. That’s why we felt the need to renew the Hippocratic Oath and include this important principle, as medical practitioners around the world will be the ones working closest with technology.
|8. A vision is needed for new educational curricula, combining knowledge from the humanities, the social sciences, and engineering studies|
This is needed not only in those fields but also in medicine and healthcare. That’s why Dr. Bertalan Meskó launched a pilot course, “Lessons in Digital Health”, at Semmelweis Medical School in the autumn semester of 2017 to test in practice how the principles and skill set necessary for 21st-century doctors could be taught. The course has since been adopted by several medical schools, such as the UP College of Medicine in the Philippines and the School of Medicine at Marmara University in Istanbul, Turkey.
|9. Education on computer science/informatics and its societal impact must start as early as possible|
The Medical Futurist also believes that STEM education, coupled with the social sciences and the arts, should start as early as possible. Thus, beyond urging kids to take up science, technology, engineering, or mathematics classes, we always draw parents’ and children’s attention to how important the study of arts and philosophy is along the way.
Without a clear understanding of human nature, we will get lost in the technological jungle. That is what is happening today – but we hope that by putting humans back in the cockpit and applying these principles as soon as possible, we can mitigate the negative effects of technology and truly arrive at a point where disruptive innovations serve humans.
The Future Is About Empathy, Not Coding in The Medical Futurist
Digital health is growing fast — but at what cost? in TechCrunch