In this episode of #LØRN, Silvija talks to Evi Zouganeli, professor of assistive technology and intelligent systems and head of the Automation, Robotics, and Intelligent Systems (ARIS) research group at OsloMet. Zouganeli builds intelligence that resembles that of a human being, and explains in the episode that AI actually can function in the real world, in our close environment, and do more than one thing.
With Evi Zouganeli and Silvija Seres
Welcome to Lørn.Tech – a communal learning effort about technology and society. With Silvija Seres and friends.
SS: Hello, and welcome to a Lørn conversation. My name is Silvija Seres, and my guest today is Evi Zouganeli, who is a professor of assistive technology and intelligent systems at OsloMet. Welcome, Evi.
EZ: Hello, welcome. It’s very lovely to be here.
SS: Evi, did I pronounce your surname correctly?
EZ: Yes, I was astonished actually, because you didn’t ask me before we recorded, so it was really good. Thank you.
SS: But here I am actually uncertain whether you’re Italian or Greek or something else.
EZ: I am Greek, and my name is probably Italian, and I’m sure we’re all a mixture of a bit of here and a bit of there, but I’m Greek.
SS: Greek, very cool. So, I’m just going to say a few words about the series, and then I’m going to get on with our conversation. Lørn is doing a series of about 10 new conversations about applied AI, and what we want to do is to showcase people. We’ve shown quite a lot of entrepreneurs working with AI, and we have also shown a few public sector workers on AI, but we want to go broader and basically cover many interesting perspectives of people who use AI in practice today. And it is especially your work related to human–AI interaction for people with dementia, and assistive technologies, that I think is super exciting here. So, it’s a general conversation where I try to learn what’s moving and what’s shaking in assistive AI. Okay?
SS: Excellent. I always start by asking people to present themselves in a personal way, because we have this pedagogical model and theory that people learn from listening to a conversation if they feel like they’re included in it, and they know you a little bit. So, who is this person Evi, and why does she care about AI?
EZ: I’ve been working at OsloMet since 2011, with AI and intelligent systems at the Department of Electrical Engineering, where we have a focus on automation and also medical technology. I have an interest in applications that come closer to people, that are really useful for people – applications that maybe have to do with health – and that’s how I got involved in assistive technology, in collaboration with people from the health department. And then we started with this project on assistive technology for people with cognitive impairment or dementia, which had a field trial at Skøyen Omsorg+. And, I mean, this is just one application of intelligent technology. It’s just a start in a way, and the same principles apply to other things. That’s where I am now, I don’t know. It wasn’t very personal.
SS: If we go personally, where did you come from in Greece, and what made you start in electrical engineering or what you studied – Applied physics?
EZ: Yes. I come from Piraeus, the port of Athens, but really my parents come from a small island in the middle of the Aegean Sea. Maybe that’s why I came to Norway – there’s a lot of sea here as well. So, yes, I started with applied physics with electronics, and then I did my master’s thesis in the UK at UCL, University College London, in electrical engineering. The focus of my Ph.D. was more on nanodevices. Then I did a postdoc in Zurich, also in the same kind of area. Then I came to Norway and worked for 16 years at Telenor Research and Development. There I worked with broadband networks, intelligent networks, network management, and optical systems. So the focus kind of changed. Then I worked one year at Radiumhospitalet – or seven months, between jobs. Then this job at OsloMet came up and I started there in 2011. So it’s been a journey, how should I say it, from kind of scripted intelligence in systems to artificial intelligence. Which I think is kind of natural.
SS: So, you have a lot of difficult words here. You talk about intelligent systems, you talk about perception, anticipation, human–AI interaction, and support systems. I need to ask you to translate this into pictures, and to talk to people who don’t know what an intelligent system is in engineering or computer science. So, let’s just paint a picture around one of the projects that you are working on now – I think it’s easier for people to understand the concrete. Or the projects that you have worked with, like this Norwegian Research Council-funded project on technology, health, and ethics, on assistive tech for elderly people with cognitive impairments. What was the project about?
EZ: The project has so many parts, because it was interdisciplinary – it was very interesting – but if I go into the details of that, we will probably not have time for anything else. I think I will rather say more about what an intelligent system means in this context. So, for example, in this case an intelligent system would have been a smart home. And there are commercial smart homes – you can buy equipment to have a kind of smart home – so when we speak about intelligent systems generally, we are speaking about a system that can somehow do something automatically, and then the question is: is that intelligent? I mean, that’s not particularly intelligent; it’s doing one function that it has been programmed to do. Depending on how you define it, you could say that this is not an intelligent system – it’s a very dumb intelligent system. And it can progress to become a really intelligent system, which means a system that has capabilities, can understand what is going on, and can assist you with what you need in the context of what is happening at the moment. And it can learn from what you’re doing and repeat it, or adapt it to your preferences – things like that. It can be the simplest things, like switching on the lights when you enter the room. And not start flickering on and off, like where I have my washing machine: when I go to switch on the washing machine, the lights come on, and when I try to put in the soap, the lights go off. That’s not an intelligent system. Today’s so-called intelligent systems are automation, which is often exceptionally stupid. So, we want to make them intelligent so that they can actually assist us instead of giving us headaches every time.
SS: I want to play ball with that image. I have Google Home, and I’m surprised how often it just turns itself on without us intending it to and starts talking back, and how difficult it is for it to learn – especially knowing how extremely able Google is with AI. I guess there’s still a big disconnect between some of the most advanced applications and what we put into these intelligent systems we call smart homes. And what you are trying to do, if I go back to this project, is understand a specific use of an intelligent system – let’s say a smart home, or maybe even just a room, which could be a very medically enabled room for a particular group of people. It could be elderly people with dementia, or people with physical impairments, or it could be young children left home alone. And part of what you’re trying to do to make the system more intelligent is help it to learn, help it to understand the context, and help it to understand the particular needs of its application.
EZ: Yes. One thing I want to say is that you can think of systems that are only software, like this Google thing you mentioned, or you could have systems that are both software and hardware. When I talk about intelligent systems, I’m talking about intelligent systems that have a body – not just software on a PC that has some intelligence. But, for example, a car that is intelligent. We always talk about self-driving cars; they would have to be much more intelligent than they are now to be used in real systems and be reliable and safe and all of that. It’s some way to go before we have that. And it is also the interaction between the software, the hardware, the people involved, and the environment around the people involved. The machine will have to understand and behave, and then do the actions that need to be done and make the right decisions – always. So, what I worked on within that project you mentioned was an intelligent system, and it was, to begin with, for this group of people with cognitive impairment, who it turned out could not contribute so much there. So they were co-producing, we said, or collaborating, when we designed the system. But that was just one case. What I am more generally interested in is intelligent systems for other applications elsewhere: intelligent cars, or intelligent infrastructures like city infrastructure – traffic lights and these kinds of things – and industrial applications. Because in industry you no longer have robots that work alone and produce something. The new paradigm is that you have people and robots that need to collaborate. To be able to do that reliably and safely, you need a much more intelligent system – be it the robot, be it the total system, be it the smart home, be it whatever it is.
SS: It could be a hybrid solution where it’s not either AI or people, it’s the human plus AI collaboration.
EZ: Yes. You know, I can’t change the people working with the AI, but the purpose of this is to create AI – intelligent artificial systems – that can collaborate with people and interact safely and meaningfully. So, they have to be able to do a job that is more than just a repetitive task. Where we are now, a lot of the AI we have used is quite impressive – I won’t say anything against that; it is really impressive what has been achieved – but it is mostly getting loads of data, like medical data, and then doing statistics on it, getting correlations that tell us something. That’s not the kind of AI that I’m interested in. I’m interested in developing intelligent systems that can do things – also more physical things, if you like.
SS: So, basically, intelligent systems. Sorry.
EZ: Like lifting something or moving something, so you have some sort of automation in the picture. You have some sort of physical system that is being controlled, like a body, a robot, or a car.
SS: Yeah, or a house.
EZ: A house, yeah! Or at least the inside of the house.
SS: Inside of the house, yes. So, automatable systems that not only perform actions, but also interact – because I heard you say that interaction is a very important part of this.
EZ: Yes, absolutely, and that is both the communication kind of interaction – like you say something and the app replies, or understands a command, or gives back information – but it’s also the physical interaction. If you have a robot, or any small machine in your vicinity, it can’t start moving around while you’re moving your arms around it. It will need to perceive its environment, understand what is going on, and move in a way so that it doesn’t damage you or anything else. That’s the physical interaction.
SS: Evi, if we go back to thinking in pictures: could you describe a few functions, or a few desired end states, of this smart intelligent system for dementia patients? And then I would love you to talk a little about your two pre-projects as well. You have one on personalized remote physiological mentoring – sorry, monitoring. I think this is going to be a very important part of how we look after our health in the future. Can you tell us a little about what it means? What sort of gadgets does it combine?
EV: Did you want me to say something about the first project first?
SS: The first project first, yes.
EZ: Things take time. In that project, what we wanted to build was an intelligent system using commercial smart-home equipment – so sensors, basically. And we wanted it to be able to adapt to the individual user and assist the individual inhabitant in the way that he or she needed. Well, we didn’t get that far. We did the first step, and the first step is perception and anticipation. Perception is needed to understand what is going on, and anticipation means that the system can predict the next thing that will happen. And this is required to be able to interact with you. For example, it knows that now you will move from the kitchen to the living room, and it can be aware of that and not be in the way. You need perception and anticipation – that is the first thing you need for any sort of interaction. We did primarily these, and we used two types of sensors. There are on/off sensors – like you open a cupboard, it’s on; you close it, it’s off – and similar. But we also used a system with a low-resolution video camera. That is a commercial system really, a fall detector, but we used it for computer vision, so we could identify what kind of activity the person was doing, with the purpose of assisting them. But we didn’t manage to do the last part – we need a second project for that. But we learned a lot.
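The perception-and-anticipation idea Evi describes can be sketched as a toy next-activity predictor. This is a hypothetical illustration, not the project's actual system: it assumes perception has already turned sensor events into activity labels, and models anticipation as a simple first-order Markov prediction over observed transitions (the class name and activity labels are invented for the example).

```python
from collections import defaultdict


class ActivityAnticipator:
    """Toy model of anticipation: predict the most likely next activity.

    'Perception' is assumed to supply a stream of activity labels (e.g.
    derived from on/off sensors or a low-resolution camera); 'anticipation'
    here is simply the most frequently observed successor activity.
    """

    def __init__(self):
        # counts[current][nxt] = how often activity 'nxt' followed 'current'
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, activity_sequence):
        # Count each consecutive pair of activities in the observed sequence.
        for current, nxt in zip(activity_sequence, activity_sequence[1:]):
            self.counts[current][nxt] += 1

    def anticipate(self, current):
        # Return the most common successor, or None if we have no data.
        successors = self.counts.get(current)
        if not successors:
            return None
        return max(successors, key=successors.get)


# Hypothetical sensor-derived activity log for one morning routine.
anticipator = ActivityAnticipator()
anticipator.observe(["sleep", "bathroom", "kitchen", "living_room",
                     "kitchen", "living_room", "bathroom", "sleep"])
print(anticipator.anticipate("kitchen"))  # → living_room
```

A real system would of course need probabilities, time of day, and much richer context, but the sketch shows why anticipation needs perception first: the predictor is only as good as the stream of recognized activities it learns from.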
SS: I’m just thinking, all these things sound very achievable in theory, and then once you start applying them to concrete cases and you want some flexibility, things turn out to be very demanding, as you say. But I’m thinking, even just having solutions like what we in Norway call “komfyrvakt” – basically the thing that makes sure your stove turns itself off properly. As much as they have tried to make this an intelligent system, mine beeps all the time and the alarm gets set off. Solutions like this would be super useful to people with dementia. Or, you know, detection of people leaving the house, or falling in their bathroom or bedroom.
EZ: I don’t think it’s only for people with dementia – I mean, I would like to have that working as well. There are so many useful things you could have if they were working properly, because at the moment it can be seen as more of a nuisance than help. So, my colleagues from the health department interviewed the elderly people and their helpers and caregivers, and it turns out that what they have problems with is really simple things. Like the stove thing: it turns off when you don’t want it to turn off. You just leave the room, and then half an hour has passed and it has switched off the cooking, and you’re cooking something that takes more than half an hour, and then you come back and “oh no, it’s not done.” Or if it is a person with cognitive impairment, they could even get very worried – “what happened now, why isn’t it done?” – they can get very confused. So, simple things like that. But this was, as I said, a first step. What I am interested in is what you mentioned: this pre-project on human–AI interaction and collaboration. We applied for funding from the Norwegian Research Council for this project. We got very good feedback but didn’t get the money, so we’re going to apply again and again and again until we get it. We can’t be talking about artificial intelligence if it’s something that can only exist in a box, away from us. It has to be safe and reliable to be close to us, to communicate with us, to collaborate with us, and to do things on our terms, rather than us trying to adapt to it. It has to happen – we really need to move to that level before it can actually be useful and safe in our environment, our close environment.
SS: So, the main kind of thread in your research going forward is about really making this interaction happen in smarter ways.
EZ: No, it’s not about making the interaction. Other people work with that; I don’t work with the interaction. I work with the intelligence of the system that will enable it to interact.
SS: I understand.
EZ: So, it’s, how should I say, the backstage intelligence. I don’t work with the interaction – other people work with the interaction, like designing the interaction or all this interface kind of research. I don’t work with interfaces; I work with the backstage intelligence.
SS: So, for people who are listening to this and are trying to understand a little bit more about artificial intelligence: if they are working in healthcare, they can think of this as something that will be coming in the infrastructure they will be working with, both in hospitals and at homes. People who work in production and industry can think about their automated production lines as something that will become more intelligent in terms of this interaction going forward.
EZ: Yes, absolutely. For example, in industry, imagine now: first of all, you don’t have so many things working right next to humans and doing things at the same time together with the human. It’s more like the robots on the production lines do things by themselves, and the people may be in the vicinity, but they do other things next to them. Imagine you had some robotic assistant that also does stuff together with you, and not repetitive stuff. So far, we use intelligent systems for repetitive tasks and heavy tasks, but the next step would be systems that work together with you like a buddy – you do things together. Then you need a lot of flexibility, adaptability, and understanding to be able to do that. And also when it comes to training: nowadays, if you want some machine to do something other than what it has been doing so far, you need to re-program it. We need to find ways where you can show the machine – you can demonstrate – and then it has the intelligence to learn the same way a human, or maybe a human child, would learn by observing. And this is one part of that. There are many aspects to it.
SS: What’s your estimate on the timeframe for this? I’m just thinking, most of us don’t know enough about it, on one hand, I see there’s a huge development in sensors and in the availability of relatively cheap sensors, computing, and programming libraries to keep this together. On the other hand, maybe, we are learning that these things are incredibly complex. So, where do you think it’s going?
EZ: I don’t know, to be honest. The two times I have tried to predict how long it would take for something to be available, I was off by a couple of decades, so I will not risk another completely wrong prediction. I don’t think it’s very easy to predict exactly. I think it’s also going to come in steps – it’s not like “now we have nothing” and in two years suddenly everything is working. It’s going to be smaller steps, and it depends on who is involved. Because at the moment there are a lot of big companies involved, and there’s a lot of money in this – a huge economic potential. The moment big players put a lot of money into something, it goes faster, so I think it’s very difficult to predict. We need several small breakthroughs. And as I said, I think it can happen suddenly. Like the first time I saw these – what do you call those – electric scooters that are parked everywhere on the pavement. The first time I saw them, I was like, “oh, did they manage to do this?” I thought we were still struggling with the algorithms for the balancing. So, I have been surprised both ways. Sometimes I’ve been like, “wow, this happened in like 5 years, I thought it would take 20,” and other times I think, “in a couple of years we are going to have this,” and 20 years later we still don’t have it. So we have to see, but it’s very exciting. And I think a lot will be happening gradually. We’re not going to get in a minute to the, you know, to the…
SS: To the moon. But I think it’s really interesting also to see how progress goes in these jumps and spurts, in a way, but also how we often end up with completely different things from what we thought we were going to build – how adaptive the whole process of innovation is, and how we sometimes completely underestimate the effect of something. Like with electric scooters. I think the really interesting thing there will be once this starts working for other kinds of shared transportation tools that work for old people or kids. But I want to ask you, towards the end, Evi, about two things. One is, where do you see the biggest potential in Norway? You talked to me a little bit about the public sector, and I want to ask whether you, a bit like me, see yourself as an outsider looking in on the system?
EZ: I don’t know. I’ve lived in Norway for a very long time, but I guess, yes, we are also outsiders to a certain extent – though I believe you could say that in other places too. You know, Norway is a very small country in terms of the number of people, and that is both a disadvantage, I guess, but also an advantage. Everybody kind of knows everybody, and the threshold for getting in touch with people is lower than in big places; if you don’t know someone, then someone you know knows someone you can talk to, even in the most difficult places. So that can be an advantage. Another important advantage is that Norway has a strong and well-educated public sector that has a tradition of serving the community, and a strong industry – in some areas a very strong industry. That can be very promising when it comes to AI, although Norway is a relatively small country in terms of the number of people. But another thing I want to say about advantages: I think Norway and the rest of Europe have an important role to play in the development of AI. I’m not worried that AI will want to harm us, but AI can be misused if it falls into the wrong hands, and that’s why I think Norway, which is a peace-loving country, and other peace-loving countries, should be at the forefront of the development, to do our best so that AI will not be misused. It is a difficult task, but we need to be at the forefront to be able to have the most sustainable attitude or strategy.
SS: I think that’s a super important point, because unless we actually compete efficiently with suppliers from China on the infrastructure side, and with the Silicon Valley suppliers especially on the software side, others will be the ones deciding how our AI-enabled future and our AI-enabled houses and everything else will develop. So, we have to innovate to stay relevant, I guess.
EZ: Yes, that too, and also to be part of the community and be strong partners in this community – not just “oh yes, that’s lovely, let us use this and let us use that.” We need to be at the forefront, and we need to anticipate dangers and have solutions for the dangers before they emerge, if you see what I mean.
SS: So, if we are to conclude, Evi: what do you think is the most important thing for generalists, for people who don’t know much about AI, to take away from our conversation? If there’s one thing you would like them to remember, what is it?
EZ: Well, I guess, for example, this slogan coined by IBM: “people-literate technology rather than technology-literate people.” We want to make the technology intelligent so that it adjusts to us and collaborates with us on our terms, rather than trying to educate people to use technology, if you see what I mean.
SS: That’s a lovely picture. I hadn’t heard that quote, but technology needs to adjust to us rather than the other way around, and it’s really up to the technologists to deliver that kind of solution – not to complain that the users are not picking things up well enough or quickly enough.
EZ: Yes, I guess – but of course the technologists together with the designers, the psychologists, the health experts: it needs to be an interdisciplinary effort to get there.
SS: Agreed. Evi Zouganeli, thank you so much for joining us here in Lørn for this interesting conversation about intelligent systems and applied AI.
EZ: Thank you for having me here!
You have now listened to a podcast from Lørn.Tech – a communal learning effort about technology and society. You can now also earn a learning certificate for listening to this podcast at our online university, Lørn.University.
Who are you and how did you become interested in AI and the technology around it?
I have a background in nanodevices, then worked with optical networks and intelligent networks. I wanted to do something useful and switched to applications of tech in health. At some point, it struck me that technology is dumb and needs to learn to work with us humans, rather than the other way round.
Interested in how our brain works and trying to copy its architecture to artificial systems.
What is the most important thing you do at work?
I teach, and I do research. I guess I am a researcher at heart – enjoy doing research, incl. supervising/ guiding (Norwegian word: veilede) students to do research.
Why is it exciting?
It is the required next step for real AI: to actually use AI in real applications in our close environment, it must be reliable and safe. I find this a really exciting area to work in. I guess I am a curious scientist (a bit of a nerd 😊), but I am also motivated by a strong wish (need?) to work on something that has real value, rather than generating money for some company or other.
What do you think are the most interesting controversies?
Where do I start... AI is actually at an embryonic stage. There are controversies regarding how to build it, and how successful it may be. And a long list regarding how dangerous it may be.
Your own relevant projects in the last year?
- NFR funded project, now completed. Interdisciplinary incl. technology, health, ethics. Assistive tech for elderly w/ cognitive impairments or dementia. Field trial at Skøyen Omsorg+
- pre-project on AI for personalized remote physiological monitoring, in collaboration with a company (less exciting for me, but very useful 😊).
- pre-project on Human – AI interaction and collaboration (most exciting for me, trying to get external funding).
Your other favorite examples of similar projects, internationally and nationally?
Fascinated by the work of Maurice Conti in the US.
How the brain works, how we understand, reason, learn, decide, etc.
What do we do uniquely well in Norway within AI?
Many good groups, but uniquely in Norway – not sure. I think there is potential in Norway from collaboration with a strong and well-educated public sector; and strong industry, incl automation, that will gain a lot from embracing AI. Combine this with being a small country hence there is a lower threshold for approaching industry – we all know someone who knows someone at the right place.
But for me Norway and other peace-loving countries have an important (crucial) role to play: we need to be at the forefront of development and do everything we can to ensure that AI is not exploited in the “wrong hands”. This will be a very challenging task.
Gather with a friend or a colleague and see if you can answer the question below.
What does artificial intelligence mean? Name a few examples of how we use it in everyday life.