The future looks bright if Generation AI can address cybersecurity
There’s quite a lot of optimism in Generation AI, the IEEE study of Millennial parents’ attitudes about artificial intelligence. The findings of the study are evolutionary, not revolutionary, as views towards artificial intelligence have become more refined over the years. Among the findings, the survey shows:
A majority of Millennial parents (80 percent) say AI technology raises their expectations that their Generation Alpha babies will learn faster and more than they did, while the remaining 20 percent say their expectations are the same or lower.
Three-quarters of Millennial parents say they would consider an AI-powered tutor for their child.
Three-quarters of Millennial parents of Generation Alpha kids say they will encourage their child at least somewhat to consider studying and pursuing a career in engineering given the world-changing activities in that field.
These findings reflect the growing acceptance of robots in the classroom and elsewhere by children, Millennial parents, and teachers. Robots were initially used in the classroom to help children with autism and have now served as teaching assistants or tools for several years. A separate study by the IEEE found that teachers “had numerous positive ideas about the robot’s potential as a new educational tool for their classrooms.” These robots, however, were not true artificial intelligence and required programming to perform specific educational tasks.
Robots and artificial intelligence seem to be a positive development to enhance the human experience. An artificial intelligence avatar that can alter its tone of voice can help children to learn social cues and empathy. Similarly, robots can help teach younger children through mirroring, although the effects of introducing robotics or artificial intelligence to the very young have not been studied thoroughly.
However, the optimism shown in this study presupposes that we can move past our current cybersecurity problems. Consider the recent effectiveness of the Petya/NotPetya cyber-weapon. A capable third party could attack the computers on which an artificial intelligence is deployed, rendering the AI and all attached robots effectively useless. This inelegant, brute-force approach could also be supplemented by more subtle attacks that change the behavior of the artificial intelligence by feeding it falsified environmental data. Again, this is an evolutionary approach, similar to the smokescreen attacks hackers currently use to distract security operations center personnel and machine-learning systems while conducting far more targeted attacks.
The study also showed that about two-thirds of Millennial parents would rather have AI help them live independently in their golden years than rely on their children. Again, this is evolutionary thinking. We already have robots that work with senior citizens with dementia. These robots can ask elderly patients to tell them stories from their memories and are tireless conversation partners. In the future, ambient intelligence could remind people to take their medication, monitor their blood pressure and heart rate, and analyze speech patterns to gauge the patient’s mood.
However, seniors may project emotions onto these robots and artificial intelligence and make decisions based on that perception, even when the artificial intelligence has no emotions and may be unintentionally or intentionally manipulating the user. The primary interaction with artificial intelligence, ambient intelligence, and robots will be via voice, which conveys emotion and builds emotional bonds. This challenge could be partially solved through independent monitoring of patients so that they are discouraged from reading too much into the intentions of their artificial intelligence.
Electromagnetic hardening or shielding is the other significant challenge associated with robots in a home care setting. If a terrorist or rogue nation detonated a nuclear bomb several hundred miles above another country, the resulting electromagnetic pulse (EMP) could damage or disrupt electronic components throughout the country. Such an attack would disable poorly shielded robots, and seniors or other vulnerable populations who depend on home robotics instead of nurses would find themselves without necessary assistance.
If the developers and engineers of forthcoming artificial intelligence software and platforms can move past the numerous inherent security risks, Generation AI presents an appealing version of the future.