• HKU Techie
  • "Does AI have desires, hopes, fears, goals?"

    "Does AI have desires, hopes, fears, goals?"

    A leading authority in the philosophy of artificial intelligence, Chair Professor Herman Cappelen is the director of the AI, Ethics and Society MA programme, the director of the AI&Humanity-Lab, and co-director of ConceptLab at the University of Hong Kong. Author of some of the most influential papers in this field, he has also written several books on the philosophy of AI. In this extensive interview with our science editor Dr Pavel Toropov, Professor Cappelen explains how AI is changing us and how humanity can survive AI.

    ❓ Dr Pavel Toropov: You are a philosopher. It is easy to see why we need a computer scientist, an engineer or a mathematician when it comes to AI, but what is the role of a philosopher?

    💬 Professor Herman Cappelen: We philosophers have spent centuries trying to understand what it means to have a thought, a desire, a goal, a fear, and love. Now we use those theories to investigate whether AI can have thoughts, desires, hopes, fears, and love. You are not trained to answer these questions if you are a computer scientist.

    As we build more and more powerful AI, we want to know about its capacities. For example, if you create something that has desires, consciousness, awareness and emotions, then you might have made something that’s an agent. You then might have moral and ethical responsibilities towards it. Again, these moral questions are not something computer scientists are experts on or even trained to think about.

    If you create something that has its own goals and desires, is really, really smart, and is running our economy, electricity system and the military, you’ve created a risk. That’s why people talk about AI risk and develop strategies to make us safe from potentially dangerous AI.

    ❓ So, are philosophers “psychologists” who help us understand what is going on “in the head” of the AI by providing analysis beyond what algorithms and mathematics can give us?

    💬 Psychologists are primarily trained to answer questions about humans: human fear, human anxiety, human development… They are not trained to think about or experiment on things made from silicon. With AI, there is now a whole new field that takes the concepts we use to describe the human mind and applies them to a new kind of creature. The bridge from talking about human cognition to AI cognition is built in large part by philosophers.

    It might be that we need a whole new terminology for the “psychology” of AI. Maybe AI does not want things in the way that we do. Maybe AI does not hope or fear the way humans do, maybe it has other (psychological) states, different from ours.

    ❓ What does your work on AI involve?

    💬 One part of my work is about understanding the way things like ChatGPT understand language, the ways AI can and cannot communicate, and the ways in which it can be similar to or different from humans in its grasp and construction of language. We want to know if they are communicative agents as we are, or if they are a totally different thing.

    ❓ What do you mean by “totally different thing”?

    💬 Some theorists claim that AI systems don’t understand language at all. On that view, they are like parrots that produce human speech but don’t understand what they produce. Something goes in, something goes out, but there is nothing there.

    I worked on the nature of language and the nature of communication long before anything like ChatGPT came about. When ChatGPT came out, I thought it was incredibly interesting from the point of view of my research, because we might now have created systems totally different from us that can communicate. This gave us new insights into the nature of language and communication.

    ❓ What kind of insights?

    💬 Almost everything we know about language and communication comes from studying humans, but now it turns out that there are many other non-human ways to communicate. ChatGPT is doing it, and maybe even better than us! I wrote a book on this topic: “Making AI Intelligible”.

    (Making AI Intelligible can be accessed here: https://arxiv.org/pdf/2406.08134)

    ❓ In what ways can ChatGPT be better at communicating than humans?

    💬 It does not get tired. It is not going to get up and leave because it has another meeting to go to. It remembers. It always talks about what you want to talk about. Its ability to process and produce conversation is much better and faster than ours. ChatGPT can produce everything I will say in the next half hour in ten seconds, and it could process this entire conversation in two seconds.

    ❓ In addition to language, what else interests you, a philosopher, about AI?

    💬 Does AI have beliefs, desires, plans, even emotions? Does it want anything? Does it want to answer our questions? Is it capable of having a desire and a goal? Can it be held morally accountable? Do we have moral obligations towards AI? Are they potentially our intellectual superiors, and if so, how do we react to that? My next book on AI, “The Philosophy of AI: An Opinionated Methodological Guide”, is in large part about these issues.

    ❓ So, does AI have goals?

    💬 I think the answer to this is yes.

    ❓ Why?

    💬 Because AI exhibits the kind of structured behaviour that, in a human, we would count as having a goal. It behaves as if it had a goal, a plan and some beliefs, and then acts on those beliefs.

    ❓ Could you clarify?

    💬 For example, if you ask it questions, and think – why is it giving me answers? Well, one way is to say that AI wants to answer your questions, it understood your questions, and it thinks that the answer is “bla bla bla”, and so it says “bla bla bla”. That’s the kind of explanation that we use for humans.

    The people who disagree say: “Yeah, but that’s not really what’s going on, because it is really just processing symbols and just predicting what the next word is.” I think this is a horrible argument because you can say that about humans too: “What’s really going on is just processing in some fatty tissue inside the brain”. That would be a bad argument for the view that humans lack goal-directed behaviour. It’s an equally bad argument in the case of AI.

    ❓ So, do you think that AI can have a mental life?

    💬 I’m confident that people who say: “AI cannot speak a language, it cannot have a mental life”, are basing their views on bad arguments. The big picture is that I don’t think we can be sure about the answers until we have resolved some extremely hard philosophical questions.

    We used to think that non-human animals couldn’t think, plan, or have emotions. Now we are much happier to accept that elephants and dolphins, cats and dogs, can communicate and have rich cognitive, emotional and social lives. You don’t have to be a human to have these. Mental and cognitive life can exist in things very different from us. But then there is a big leap – could it also exist in something non-biological? I am not convinced by the arguments that it cannot, and I think there are strong arguments that it can.

    ❓ One of the main worries about AI is that it will replace humans in every job imaginable. What is your view on that?

    💬 Some jobs will go very fast. There is a lot that AI cannot do now, but this will change fast. People say that AI makes mistakes, but I am completely unmoved by that. Humans make mistakes much more than AI, and the rate at which it improves is unbelievable. Chances are that those deficiencies will disappear really fast.

    But surprisingly, much will stay because we want human-to-human interactions. And in more areas than you might think! Let’s look at chess. Humans are much, much worse at chess than even a cheap computer. The best chess player, Magnus Carlsen, can never beat the best computer.

    But the important thing is that nobody wants to watch two computers play each other. People want to play people and watch other people doing the same.

    I asked my 13-year-old daughter: would you listen to a musician who is an avatar that creates the same music that you like? (She said) Of course not! We care about people, and a lot of the things we do are about people. I would not talk to you or care much about this interview if you were an avatar sitting there in front of me.

    ❓ Let’s talk about the risks that AI poses to humanity, this is another area of your work, correct?

    💬 Will a superintelligence turn on us? I don’t know if it will, but it is possible.

    I have a new paper that's co-authored with Simon Goldstein. It’s called “AI Survival Stories”. It is about the ways in which we can survive AI.

    There are two broad categories of how we can survive – one is what we call plateau stories and the other non-plateau stories.

    One kind of plateau is technical. We keep developing AI over the next few years, but nothing significantly better happens. For some reason that we could not foresee, the technology does not keep evolving, it flatlines.

    Another plateau story is cultural. Maybe we’ll eventually treat dangerous AI the way we now treat biological or chemical weapons. One way that could happen is that someone uses AI in a very dangerous way that has horrific consequences. After that, AI is perceived as a threat to humanity and there’s a worldwide effort to block the creation of dangerous AI.

    The other scenario is that there is no plateau, and AI becomes superintelligent and superpowerful. Then there are two ways in which we can survive. Either we make sure that AIs are aligned with our values and are always nice to us, or we control the AI systems and somehow make sure that they can’t harm us.

    A technical plateau is not super likely. A cultural plateau is not that likely. Alignment is incredibly difficult. Trying to control a superintelligent AI seems hopelessly difficult. Your estimate of the probability of humanity surviving AI depends on how likely you think these various survival stories are.

    ❓ Replacing human jobs and threatening humanity aside, AI is now affecting our daily and emotional lives. In the 2013 film “Her”, a man falls in love with an AI chatbot. This seemed like pure fiction then, but now we are not so sure. Can AI change us?

    💬 AI can change us in very unpredictable ways. It changes our language, changes how we classify the world, changes what we think is important in the world, and changes how we think about each other. It is all very new, only two years old, but the change is unbelievably fast. I have never seen anything change that fast.

    There is now a huge market for romantic chatbots - for boyfriend, girlfriend, partner, whatever bots. People can now develop a super emotional, romantic relationship with a chatbot.

    I was just at a conference in Germany where one of the speakers had created an AI robot to help interact with people with dementia. It was remarkable how often the patients preferred it to the overworked and stressed nurses.

    There is something great about these chatbot friends. They are never tired; they are never in a bad mood. They don’t ask for anything in return. If we get used to that as a model of friendship and intimacy, might we expect that from humans too: why is he or she not like a chatbot?

    So one thing that I think is already happening is that words like “friendly”, “empathy” and “relationship” are changing in meaning. Our classification of emotions and cognitive states is in flux, changed by our interaction with AI systems.

    Our ConceptLab at HKU is about how language changes. And these kinds of language changes happen when we are in new situations. Now we are in a new situation – one where we can talk and engage with AI systems.

    ❓ Continuing on the subject of AI films, which films about AI do you think come closest to reality?

    💬 The Matrix is an amazing film, a brilliant illustration of a very important philosophical issue.

    AI has created an illusion of life, and you are not a person but a figment of a big computer system. It is something that philosophers think about, and The Matrix is an ingenious illustration of that possibility. Philosopher Nick Bostrom has a super famous paper arguing that the probability that we are in a matrix is pretty high.

    It is a very simple calculation – what is the probability that AI systems will be able to generate a completely realistic world? Pretty high. How many of these are there going to be? Probably a lot. How many real worlds are there? One. So, what’s the probability you are in a real one? Pretty small.

    ❓ And finally, will AI replace philosophers?

    💬 I’m not sure. There will probably be AI systems that can generate philosophical arguments better and faster than any human. What I’m less sure about is whether people will want to talk to and interact with this system.

    Recall the point from before: lots of people want to pay attention to Magnus Carlsen playing chess, and no one cares about the games played between chess computers. Maybe philosophy will be like that? I think we’ll be harder to replace than, say, computer scientists. Their function is to produce a certain output, and if something can produce it cheaper, faster and better, replacement is likely.

    Philosophy has human connection at its core and so is harder to replace. But maybe not impossible.

    👏 Thank you, Professor Cappelen!

    🔍 Many of the questions discussed in this interview are part of the curriculum of the AI, Ethics and Society MA at the University of Hong Kong.

    For more information: https://admissions.hku.hk/tpg/programme/master-arts-field-ai-ethics-and-society

  • "What was very difficult in the past is no longer difficult today"

    "What was very difficult in the past is no longer difficult today"

    Professor Lingpeng Kong’s research at HKU’s Department of Computer Science focuses on natural language processing (NLP). Before joining HKU, Professor Kong worked at the AI research laboratory Google DeepMind in London.

    ❓ Dr Pavel Toropov: Your research profile says that you “tackle core problems in natural language processing by designing representation learning algorithms that exploit linguistic structures.” What does it mean, in simple terms?

    💬 Professor Lingpeng Kong: We teach computers how to understand human language and speak like a human.

    ❓ And what is the main difficulty for a computer in doing that?

    💬 The ambiguity of human language. Human speech is full of ambiguity, for example: “the man is looking at a woman with a telescope”. Does the woman have the telescope? Is the man using a telescope to look at the woman?

    This is called the prepositional phrase attachment problem. Modern language processing is built around statistical methods, but there are always boundary cases that you cannot fully and efficiently model.
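
    (Note: the ambiguity can be made concrete with a toy grammar. The sketch below is illustrative – the grammar rules are invented for this example – and uses the NLTK library to print the two parse trees, one for each reading.)

    ```python
    import nltk

    # A toy grammar in which "with a telescope" can attach either to the verb
    # phrase (the man uses the telescope) or to the noun phrase (the woman has it).
    grammar = nltk.CFG.fromstring("""
    S -> NP VP
    NP -> Det N | Det N PP
    VP -> V NP | V NP PP
    PP -> P NP
    Det -> 'the' | 'a'
    N -> 'man' | 'woman' | 'telescope'
    V -> 'saw'
    P -> 'with'
    """)

    parser = nltk.ChartParser(grammar)
    for tree in parser.parse("the man saw a woman with a telescope".split()):
        print(tree)  # exactly two trees are printed, one per reading
    ```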

    ❓ Humans figure out such boundary cases easily, from context. Why can’t computers do that?

    💬 Because there is an exponentially large space to search. We must search that space efficiently within the limits of computational resources and memory. That’s the difficult part: building a statistical method to model that stuff.

    It is also difficult with low-resource languages. For example, with Swahili we don’t have enough data to train the system to work efficiently.

    I think the good thing is that with the current development of deep learning we can build models of very large capacity, and we can solve a lot of problems that in the past we could not imagine we would be able to solve. That is why people are excited about AI.

    You learn about things, and you learn to generalise to things you have not seen before. It is a matter of which model, which algorithm can generalise best from less data and less computation. Nowadays we need very large data to train systems – basically the whole of the Internet.

    ❓ You also work on machine translation. The quality of machine translation seems very good now, much better than just a few years ago.

    💬 I feel like the problems with machine translation have been solved! It has been developing very fast. Ten years ago there were translation ambiguities that you could not resolve well, but today we have large language models.

    ChatGPT translates really, really well! I think when it comes to technical documents and daily-use email, it does better than me at Chinese-to-English translation. Nowadays, if I write an email in Chinese and translate it into English, I only have to modify very, very few things.

    ❓ So will translators be replaced by AI?

    💬 I think it is already happening now. Technology has advanced so far that some of the very difficult things in the past are not that difficult today.

    Machine translation is just conditional language generation – for example, conditioning on the Chinese part to generate an English part that represents the same meaning. There are a lot of conditional generation problems like this: condition on your prompts to generate the next thing.

    Everything is inside one model now, the big language model. Before, question answering had its own system, machine translation had its own system, so did creative writing… but now it is all the same system; only the prompt is different.
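
    (Note: a minimal sketch of this “one model, different prompts” idea, using the Hugging Face transformers library; the model checkpoint is chosen only for illustration.)

    ```python
    from transformers import pipeline

    # The same text-to-text model handles translation, question answering and
    # summarisation; only the prompt changes.
    model = pipeline("text2text-generation", model="google/flan-t5-base")

    prompts = [
        "Translate English to German: The weather is nice today.",
        "Answer the question: What is the capital of France?",
        "Summarize: Large language models condition on a prompt to generate text.",
    ]
    for p in prompts:
        print(model(p, max_new_tokens=40)[0]["generated_text"])
    ```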

    ❓ What prevents machines from understanding humans?

    💬 Nothing, but there is always a philosophical debate about what true understanding is. Do you feel that ChatGPT understands you? Microsoft Copilot? To a certain degree, I do, yes… I did not expect this just three years ago, but it happened. But is it true understanding? I don’t know.

    I like to do tests - I give song lyrics (to AI) and I ask: what does this mean? And it tells me, for example: sometimes times are hard, but things will be better. I still feel that it is not quite a human being talking to me, but maybe that is because I know that the result is coming from a lot of computation.

    But if you do what is called the Turing Test – differentiate between talking to ChatGPT and talking to a human being – then it is hard, really hard. I don’t think I can guess right more than 60 or 70% of the time.

    ❓ What allowed the AI to be able to communicate like that?

    💬 We had never, in human history, trained a model of that size before. Before COVID, the largest language model had roughly 600 million parameters. Today we have an open-source model with 405 billion parameters. We never had the chance before to turn this quantity of data, such a large amount of computation, into knowledge inside computers, and now we can.
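
    (Note: some back-of-envelope arithmetic on that jump in scale, using the parameter counts from the interview and the common assumption of 2 bytes per parameter at 16-bit precision.)

    ```python
    old_params = 600e6   # ~600 million parameters (pre-COVID, per the interview)
    new_params = 405e9   # 405 billion parameters (the open-source model mentioned)
    bytes_per_param = 2  # assumption: 16-bit (half-precision) weights

    print(f"old model: {old_params * bytes_per_param / 1e9:.1f} GB of weights")   # ~1.2 GB
    print(f"new model: {new_params * bytes_per_param / 1e12:.2f} TB of weights")  # ~0.81 TB
    print(f"scale-up:  {new_params / old_params:.0f}x more parameters")           # 675x
    ```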

    ❓ What is the current direction of your research?

    💬 Our group works mainly on discovering new machine learning architectures. When you talk with ChatGPT, after about 4,000 words it forgets. The longer you talk to it, the more likely it is not to remember things. These are fundamental problems with the machine learning architecture itself. This is one of the things we are trying to solve.

    The machine learning model behind ChatGPT is called the Transformer. It is a neural network. It can model sequences and is used everywhere, for example in the AI program called AlphaFold that works with proteins.

    One direction of our work is making the Transformer better in terms of efficiency and in terms of modelling power, so that we can have a Transformer that works with ultra-long sequences and does not forget.
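
    (Note: the sketch below shows the scaled dot-product attention at the heart of the Transformer; the attention matrix is sequence-length by sequence-length, so its cost grows quadratically with context length, which is one reason ultra-long sequences are hard. Sizes are illustrative.)

    ```python
    import torch
    import torch.nn.functional as F

    def attention(q, k, v):
        d = q.shape[-1]
        scores = q @ k.transpose(-2, -1) / d ** 0.5  # (seq_len, seq_len) matrix
        return F.softmax(scores, dim=-1) @ v

    seq_len, d_model = 4096, 64
    q, k, v = (torch.randn(seq_len, d_model) for _ in range(3))
    out = attention(q, k, v)
    print(out.shape)  # torch.Size([4096, 64]); the intermediate matrix was 4096 x 4096
    ```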

    The second direction is pushing the boundary of the reasoning abilities of current language models. I have a team working on problems from the International Mathematics Olympiad. We can now use large language models to solve those problems, and they are doing really well.

    👏 Thank you, Professor Kong!

  • "From a brain image, AI can read out the emotion that the person is feeling"

    "From a brain image, AI can read out the emotion that the person is feeling"

    Professor Benjamin Becker studies the human brain. His innovative, cutting-edge research has been published in top journals such as Nature Human Behaviour and the American Journal of Psychiatry. In this interview with our science editor Dr Pavel Toropov, Professor Becker talks about the breakthroughs that Artificial Intelligence made possible.

    ❓ Dr Pavel Toropov: What is the main direction of your research?

    💬 Professor Benjamin Becker: Trying to find out how the human brain generates emotions, and what happens with these processes in people with mental disorders and how we can make this better.

    ❓ How do you use AI?

    💬 AI allows us to make very big progress in analyzing brain images. The brain is a highly complex structure, probably the most complex structure in the Universe. We are looking at the biological architecture of the brain made from billions of neurons with billions of connections. Humans, because our cognitive capacities are very limited, struggle to make sense of these very complex patterns.

    AI can help us greatly in finding patterns in this data. We recently discovered that we can use AI-inspired procedures to read emotions from the brain. AI can find out if someone feels afraid or feels disgusted, for example. Using human brain power alone, this is nearly impossible. We need complex algorithms to help us make sense of this complex data.

    ❓ How is this done?

    💬 We put individuals in MRI scanners to image their brain activity while we induce specific emotions. All that we humans can see (in the brain scan images) is that particular brain regions become active. But this is too simple, and AI allows us to see more complex patterns and read out the emotions that the individual experiences.

    ❓ Can you specify?

    💬 We, humans, see that specific regions in the brain have become active, but these are rather big structures, and what AI can do is screen those structures on a much finer level than humans can, and then use the data to generate complex patterns – like the fingerprints that specific emotions have left on the brain.

    Most amazingly, based on these patterns that it sees in the brain, the AI can read out what the person feels at a given moment. For humans this data is too noisy and too complex. A human interpretation is just not possible. AI gives us the cutting edge.

    ❓ So, for humans, basically, a brain scan is too noisy – blurry, messy – to see a pattern in it. But AI can look at a series of such brain scans, see through this noise, and say: these are all the same, and these people are all feeling, say, fear?

    💬 Yes. AI can even take advantage of this noise to make very good predictions.
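
    (Note: a minimal sketch of this kind of pattern decoding, with synthetic stand-in data rather than real fMRI recordings; the real pipelines are far more sophisticated. Each row stands for the voxel activity of one trial, each label for the induced emotion.)

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 200, 5000
    X = rng.normal(size=(n_trials, n_voxels))   # voxel activity patterns per trial
    y = rng.integers(0, 2, size=n_trials)       # induced emotion: 0 = fear, 1 = disgust
    X[y == 1, :50] += 0.5                       # plant a faint "fingerprint" of disgust

    # Decode the emotion from the pattern; accuracy above chance means the
    # classifier found the fingerprint the emotion left in the (noisy) data.
    clf = make_pipeline(StandardScaler(), LinearSVC())
    print("decoding accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    ```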

    ❓ How cutting edge is this? This was not possible just a few years ago, right?

    💬 Yes. I think there has been progress on three sides. Progress with imaging technology, MRI. Then we have progress in terms of what we know about human emotions, and the third is the progress in machine learning and AI.

    ❓ Where can this take us in the future?

    💬 When MRI was developed 30 years ago, people said: in ten years we will have understood the entire brain. This did not happen. It was too optimistic. The brain is still the most complex structure in the Universe, so to understand it will still take some time.

    I see more progress from AI in applications. In basic research we look at how emotions are processed in the brain, and we look at mental disorders, because that is where emotions become dysregulated. Patients with depression or addiction have problems controlling their emotions, and they feel these negative emotions very strongly.

    Our hope is to map these emotions in a healthy brain, and then apply AI to support the diagnostics of mental disorders. We now see advances where AI can help make a good diagnosis – for a medical doctor it is difficult to decide: does this patient have depression, anxiety or something else?

    AI could provide us with a probability value – for example, 80%, that this patient will respond to this treatment and not to this one. We will be able to make huge progress, reducing the duration of patients’ suffering and also reducing the cost for the health care systems.

    The second thing – using AI, you can have subgroups of patients and make better recommendations for their treatment.

    ❓ What do you mean by subgroups of patients?

    💬 Working as part of a large collaboration, we have recently shown that there is a lot of variation in the symptoms and brain alterations of adolescents with depression. Using findings like these, we could target different brain areas or provide different treatments (according to the subgroups). Some patients would, for example, respond better to behavioral therapy, others to medication, others to brain stimulation.

    ❓ Can this technology be used clinically, in the real world?

    💬 What we see is that there is good progress, but right now AI is not precise enough for clinical-level diagnosis. This is about human life. Perhaps soon we will be able to use AI to make recommendations, but for now the predictions are not precise enough to enter clinical practice; we need a high level of certainty.

    ❓ There is a lot of fear about AI replacing humans in many jobs. Do you think that in the future AI can replace psychologists?

    💬 I think in the next 10 years I will be able to get away with it! I am not concerned for psychology and I would recommend students who have an interest in psychology to pursue it.

    ❓ Why are you so confident?

    💬 One area where AI will not overtake us is in understanding other humans, communicating with other humans, bringing humans together and treating humans in therapy.

    ❓ Is there a scientific basis to this?

    💬 Yes. An area where we see more and more research is AI interaction with humans. We recently did a study about our trust in other humans and in AI.

    From very early on in our lives, we are very sensitive to whom we can trust. Evolutionarily, this runs very deep. Without this skill, your ancestors probably would not have survived for long, because they would have trusted the wrong people or trusted no one.

    We showed that there is a clear brain basis for our trust in other humans. At the same time, we assessed people’s trust in AI. We asked them: do you trust AI? We saw that these two “trusts” are not related! Moreover, trust in humans was associated with specific brain systems, but we did not see a brain basis for trust in AI.

    We have learned as a species to trust each other. This is ingrained in our biology. But AI, even though it is somewhat human-like, has only been around for a couple of years. How can we know whether to trust it or not?

    👏 Thank you, Professor Becker.

  • "Generative AI will change how reality is presented in film"

    "Generative AI will change how reality is presented in film"


    Ulrich Gaulke is an award-winning documentary filmmaker who has taught his art across the world, from Bosnia to Bolivia. Last year he took a position as a senior lecturer at the Media and Journalism Center at the University of Hong Kong. One of the courses he teaches is Generative AI for Media Applications, for which he has been awarded the Social Science Outstanding Teaching Award. In this interview with our science editor Dr Pavel Toropov, Ulrich Gaulke talks about this new course, the role of AI in filmmaking, and why AI cannot yet replace a human storyteller.   

    ❓How did this course - Generative AI for Media Applications, come about? 

    💬I studied computer science - a long time ago! And I am interested in technical things, like AI. I had an idea: to develop a new, pioneering class where students are storytellers, but develop their storytelling skills by using all the newest applications of Gen AI.

    ❓What do students do in your course? 

    💬First we do an introduction to AI - how large language models work, how a diffusion model that creates images works… a basic understanding of neural networks. Students learn what happens with the data inside an AI model and how the model creates a proper outcome.

    Then the students build their own AI model and feed it with data - pictures. The students take pictures of Hong Kong with mobile phones to feed their own diffusion AI model. Based on this AI model we can create more Hong Kong-related stuff.

    These are the beginnings. Then, step by step, we go through Gen AI applications: text to text, text to image, text to video, text to animation, text to speech, and text to music. After this the students can create stories and video sequences, and can start working on the final project.  

    The final project is a 3-5 minute short film, using different AI applications. They put all their AI skills together to assemble something complex, working with text prompts and reference images to create video sequences. The main Gen AI applications that we use are ChatGPT, Midjourney and RUNWAY Gen3 Alpha. 

    But first, the students must create a proper story, then divide it into different parts - like a storyboard in a fiction film. Each chapter, location, character - everything must be visualised. I let students start the final project only when I agree with the story. If their story has something poetic, revealing, touching, then I let them create characters, set-ups and visuals.

    We also have weekly assignments - to create something. We also discuss the ethical aspects of AI and the technical background. I wanted to include people working with AI and combine their experience with my perspective as a storyteller. So, I have guest speakers from the computer science department.

    I also invite guest speakers who are leading figures in AI and media, for example Professor Sylvia Rothe, Chair of Artificial Intelligence from Munich University of Television and Film in Germany. 

    We established a pioneering course, nobody else does this kind of work. 

    ❓How popular is this course? 

    💬It was booked out immediately. But more students want to attend! Journalism and media students are a priority, and so we established a summer class in June. It has the same content, but the course is open to students from other departments and faculties.  

    ❓You teach this course at the Media and Journalism Center at HKU. Is the course for filmmakers or journalists? 

    💬The course is open to both journalism students and filmmaker students, but our work is focused on the creation of works of fiction, not journalistic work.  

    Journalism is fact-based. If you are a journalist, you have to be very responsible in your use of generative AI. An AI model is not a research tool like Google. We teach that with AI, you cannot trust the outcome - an AI model does what it wants, creates its own patterns. The result may look very detailed, but it needs to be checked. 

    ❓So, what is the main use of AI in filmmaking?  

    💬Filmmakers can use it creatively as a visualisation tool - for visualising something that is not possible to shoot, for example something that happened in the past, something that you have no video materials for.  

    Animation has traditionally been used for this purpose, in combination with powerful, real stories. For example (the animated film) FLEE, about a family from Afghanistan escaping to Europe, was nominated for three Oscars. It is based on a real story but is fully animated.  

    ❓Do you use AI in your own work as a filmmaker?

    💬I am now using it for historical re-enactments in my latest documentary about five 100-year-old ladies. They talk about their past, but they only have still pictures from the time when they were very young. So, I can create video sequences based on these still images.

    ❓Storytelling is key to making films. Do you think this will be done by AI at some point? 

    💬Yes and No. It depends on what your expectations are. ChatGPT can help you write a story, but the main constellation – the plot, the characters, this must come from you! If you let AI create everything by itself, then you will see that it is just mimicking something that already exists. Stories are what humans use to communicate with each other. A good story must include something unique, surprising, that has something to do with your own life.  

    AI is based on patterns, these patterns come from the learning material, and the learning material is based on what has already been made…  A script for a TV soap opera is based on very simple elements, and writers write the same stuff every year, so that is something that AI can do. 

    ❓A little while ago, when text to video applications came out, there was talk that AI will now make films for us, eliminating the need for actors, directors…. This does not seem to be happening.  

    💬A lot of people are giving up on that idea. 

    ❓Why? 

    💬The expectations are too high. Try giving AI a simple story to do. For example, a teacher is angry at a pupil, a little girl. Try to keep consistency with both characters, try to bring them into a serious conversation – for example, the angry teacher tells the pupil that there is something wrong with her homework. It is a very simple story but try to create it with AI – and it becomes very complicated!  

    Try to find a video on YouTube, one that can make you forget that it was created by AI. It is always more than obvious that a video was created by AI – a character disappears, another appears randomly, there are many random actions, there are aliens... 

    ❓So, you see AI as a tool to create visuals for a creatively written story, one done by a human? 

    💬If you let AI do something on its own, then it goes weird, random. AI-created work is totally different from our idea of creativity. It's more like a dream.

    If you want to use AI application as a tool to create something that is based on our understanding of storytelling, of creating characters, of emotional expression, then it is very hard. Consistency is the problem – the movement, and the facial expressions of characters are not consistent. 

    It is very hard to develop a character using an AI model. You can create a realistic photo using Midjourney – for example of an old guy who is looking sad. The AI model will create an image of an old guy looking sad, but is this the old guy that you want to use in your story? Or is he completely different? 

    ❓Does AI allow you to fine-tune these discrepancies? 

    💬Gen AI can do very impressive things, but it is not like applications such as Photoshop or After Effects, where you have direct control of the outcome by changing the parameters. 

    With Gen AI, if the result is something that you cannot use for your work – then try to change something, and – it is impossible! You can instead create something else, something new, but it can differ, again, from what you expect. You can become more and more frustrated, because you do not have direct control of the outcome. 

    What you can do with AI is write another prompt. But you cannot be sure that the AI model will give you exactly what you want.  

    ❓Where does the skill of operating AI tools lie? We know what a good Photoshop operator can do; what is the AI equivalent?

    💬The equivalent is an AI operator who is very experienced in writing prompts. Communication with an AI model needs AI communication skills, and this means prompting, prompt design: how can I design the prompt to make the AI model fulfil my expectations?

    ❓Do you teach prompt design in your course? 

    💬I try, in each lecture, to talk about prompt design. But the more complex the outcome needs to be, the more skill you need to write a prompt.

    There are prompt design tools: you can write something that is not really good as a prompt, and this tool can turn your idea into a proper prompt. I teach that too. 

    The students must keep in mind that it takes a lot of work to design prompts. So, they must be prepared – have a proper story ready, and only then spend time to create, using AI, the right visuals for that story. 

    ❓What is the plan for the future of this course? 

    💬The AI applications keep improving. Last year we were mostly focused on managing the challenges and the difficulties of all the AI applications. This year we can expect more from them. Runway, the video application, is more advanced, so now we can do more with storytelling.  

    I want to make sure that the students develop better storytelling skills. Then, they can use the AI applications that are now more advanced, which means that we can create more sophisticated visuals.  

    👏Thank you, Ulrich! 

  • "Breakthroughs in AI depend on how good you are at maths"

    "Breakthroughs in AI depend on how good you are at maths"

    Professor Yuan Xiaoming works in the Department of Mathematics at HKU. An accomplished mathematician and scientist, he has been named a Highly Cited Researcher by Clarivate Analytics three times. Professor Yuan’s main speciality is optimisation, and he applies this expertise in the field of artificial intelligence. In this interview with our science editor Dr Pavel Toropov, Professor Yuan explains the importance of mathematics in AI.

    ❓ Dr Pavel Toropov: What is the role of a mathematician in AI?

    💬 Professor Yuan Xiaoming: Our role is fundamental, a foundation. I can prove it – when people talk about AI, you usually only hear about the engineering side, the programming, the implementation of the product. But – breakthroughs in AI depend on how good you are at maths! What does AI mean? Intelligence created by people. And how do you get intelligence? One answer is maths!

    ❓ What do you mean by this?

    💬 AI is artificial – fake intelligence, intelligence in the machine. AI is not fake, of course, but it is computational intelligence. And, who tells the computer to generate this intelligence? Humans, people. So, you need human intelligence – in your brain – to do AI better. And maths is the best way to improve your intelligence, your level of thinking, your logic.

    ❓ So mathematics is like training, going to the gym, but for the brain?

    💬 Yes. Mathematics is brain training.

    ❓ What is your main area in the field of AI?

    💬 Optimisation algorithms. There are a lot of optimisation problems in the AI industry. For example, if we want to minimise the bandwidth cost for a livestreaming business, that’s an optimisation problem.

    You understand the questions from a maths perspective and design fast, efficient and robust algorithms to solve them. These are purely maths problems. We work on such problems.
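
    (Note: a toy instance of this kind of cost-minimisation problem, posed as a linear program; all numbers are invented for illustration.)

    ```python
    from scipy.optimize import linprog

    # Route 100 units of livestream traffic through two bandwidth providers with
    # different per-unit costs and capacity limits, minimising the total cost.
    cost = [3.0, 5.0]              # cost per unit of traffic on provider A and B
    A_ub = [[-1, -1]]              # -(xA + xB) <= -100, i.e. xA + xB >= 100
    b_ub = [-100]
    bounds = [(0, 70), (0, 80)]    # capacity limit of each provider

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print(res.x, res.fun)          # optimal split [70, 30] at minimum cost 360
    ```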

    ❓ You also work a lot with the industry, designing algorithms for commercial application. How do you use maths there?

    💬 For example, something that everyone is talking about now – LLMs, large language models, like ChatGPT and variants of it. In LLMs there are several stages – pre-training, post-training, fine-tuning. Each stage has a lot of optimisation problems.

    For example, an LLM has a lot of connections between different neurons, and in the post-training stage you have to cut some of these connections to save hardware resources, like memory. Which neurons can be cut off? We design a mathematical model to help do this, and it can save a lot of computational resources – and that’s money.
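
    (Note: a minimal sketch of one standard way to cut connections – magnitude pruning, which zeroes the smallest weights – not necessarily the specific mathematical model Professor Yuan designs.)

    ```python
    import torch

    torch.manual_seed(0)
    weights = torch.randn(1024, 1024)         # weights of one dense layer

    # Keep only the 10% of connections with the largest magnitude.
    threshold = weights.abs().quantile(0.9)
    mask = weights.abs() >= threshold
    pruned = weights * mask

    print(f"connections kept: {mask.float().mean().item():.0%}")  # ~10%
    ```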

    ❓ You also work with AI chips. What is an AI chip?

    💬 Traditionally the concept of computer chips is just hardware. But now people integrate algorithms into the chip so that it works faster. We design algorithms into the chip to accelerate the computation in the chip.

    One standard, representative kind of work is matrix and vector decomposition and matrix-vector multiplication. We design specific algorithms to try to make the computation of vectors and matrices more efficient.

    Sometimes a structure has to be introduced into the chip. For example, the most popular chips, like the Nvidia A100 and H100, have specific structures - that’s why they work so well. One typical structure is the sparse tensor core, designed to accelerate sparse matrix computations, and we have to design algorithms to fit such hardware structures.
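
    (Note: a minimal sketch of why sparsity is worth building into hardware; the same matrix-vector product touches far fewer numbers in a compressed sparse format. Sizes and the sparsity level are invented for illustration.)

    ```python
    import numpy as np
    from scipy import sparse

    rng = np.random.default_rng(0)
    dense = rng.random((2000, 2000))
    dense[dense < 0.95] = 0.0              # zero out ~95% of the entries

    csr = sparse.csr_matrix(dense)         # compressed sparse row format
    vec = rng.random(2000)

    y_dense = dense @ vec                  # multiplies all 4 million entries
    y_sparse = csr @ vec                   # touches only the ~200,000 non-zeros
    print(np.allclose(y_dense, y_sparse))  # True: same result, much less work
    ```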

    ❓ How does HKU compare to other institutions when it comes to innovation in the field of AI and mathematics?

    💬 HKU is a good place to work on AI-related topics. I am a maths professor, and for AI we need to do a lot of maths. Theory-wise we are quite strong – HKU attracts good postgrad students, and I am very happy to be working with such good students. I also launched an AI program with the Department of Statistics and the Department of Computer Science. This program provides new resources and manpower for our projects.

    ❓ Tech jobs – software engineers, programmers, are coveted because they offer good pay and are in demand. Are there career opportunities for mathematicians?

    💬 I can see a direct link between maths and commercial value. I know that many people have no idea about this – they think maths is just formulas on paper that have nothing to do with business – but I am an example: I save money for industry.

    I am a mathematician, and I can help industry earn a lot of money. I recently helped Huawei, with my algorithms, to save USD 108 million in less than three years. That’s commercial value! I designed algorithms that reduced the bandwidth costs of their livestreaming business, which is one of the most important parts of the digital economy, as business professors like to call it.

    And – what is the specific meaning of “digital economy”? Digital means numbers, numbers on the computer. Creating meaningful knowledge from numbers depends on numerical algorithms, so it depends on mathematical knowledge.

    Therefore, the mathematical foundation for AI is really important. We are not doing AI incrementally, little by little – mathematicians can help industry with breakthrough ideas! The importance of mathematicians in AI must be highlighted.

    ❓ Final question: would you recommend mathematics as a career?

    💬 Of course!

    👏 Thank you, Professor Yuan.

  • "Our models can predict cancer treatment response"

    "Our models can predict cancer treatment response"

    Professor Lequan Yu is the director of the Medical AI Lab at HKU and his work lies at the intersection of AI and healthcare. Before joining HKU, Professor Yu was a postdoctoral research fellow at Stanford University. In this interview he explains to our science editor, Dr Pavel Toropov, how AI can revolutionise healthcare.

    ❓ Dr Pavel Toropov: How do you use artificial intelligence in your research?

    💬 Professor Lequan Yu: We use AI technology, AI algorithms to solve problems related to healthcare and medicine. We rely on multimodal AI models, using AI to analyse and integrate different medical data, such as medical images, medical reports, lab test results and genomic data. The aim is to interpret and integrate them together to help doctors make decisions.

    ❓ Could you provide an example?

    💬 For example, using an algorithm to see if the patient has cancer or not from computed tomography (CT) images. This can reduce the doctor’s workload. Also, we want to do precision medicine, especially for cancer patients. Currently, treatment strategies are not really tailored for individual patients. We want to use AI algorithms to integrate the diverse information about each individual patient and then let AI make recommendations for doctors.

    ❓ What data would AI integrate?

    💬 Radiology data such as CT scans, MRI and also pathology images – microscopic images. Recently, we have been exploring how to integrate genomic data.

    ❓ What do you mean by “genomic data”?

    💬 Broadly speaking, this refers to DNA, RNA or protein data. For example, we work on gastric cancer. We get samples of cancerous tissue, and do genetic sequencing or molecular testing to obtain molecular information, for example, about what subtype of cancer it is.

    Then the AI algorithm integrates this information with other information, such as imaging information and general lab test information, and puts it all together to make a more comprehensive prediction about the condition of the patient.

    Take gastric cancer, for example: there are different treatment strategies, such as immunotherapy, but we do not know in advance whether a given strategy and treatment would benefit a particular patient – some may not. Our AI algorithm can predict the treatment response of this particular patient and also provide survival analysis.
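
    (Note: a minimal sketch of treatment-response prediction framed as a classification problem, with synthetic stand-in data; real multimodal medical models are far more sophisticated.)

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 300
    imaging = rng.normal(size=(n, 64))        # stand-in for CT/pathology features
    labs = rng.normal(size=(n, 10))           # stand-in for lab test results
    genomic = rng.normal(size=(n, 32))        # stand-in for molecular subtype data

    X = np.hstack([imaging, labs, genomic])   # integrate the modalities per patient
    y = rng.integers(0, 2, size=n)            # 1 = responded to the treatment
    y[X[:, 0] > 0.5] = 1                      # plant a weak signal for the demo

    clf = LogisticRegression(max_iter=1000)
    print(cross_val_score(clf, X, y, cv=5).mean())  # above chance = usable signal
    ```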

    ❓ In healthcare, what does using AI allow humans to do that they cannot do alone?

    💬 Two examples. One is chest X-rays. A doctor can do the analysis very well, detecting pneumonia, for example. AI can also do it, and the reason to use AI is to help reduce the doctors’ effort and workload.

    But for cancer image analysis, that is different. Doctors can estimate potential survival or potential treatment response from the image. But this is quite subjective, based on the doctor’s experience. AI has the potential to evaluate this more quantitatively and objectively.

    ❓ Can this technology be used in clinical practice now?

    💬 Currently, for oncology, this is frontier research. There is still a way to go before putting it into clinical practice. But after we incorporate genomic data, we think it will be more workable. Perhaps within 10 years this will be applied in clinical practice.

    ❓ Other than cancer, in the treatment of what other conditions can AI help?

    💬 Cardiovascular disease. Here AI can play an important role. Certain heart disease risk predictions are less challenging than cancer. Moreover, AI can integrate and analyse chest X-ray images, and here the accuracy of AI is very high, over 90%.

    But still we have issues regarding privacy, ethics and medical regulations before we can apply it in clinical practice.

    Collaboration is very important – we must collaborate with doctors, hospitals, medical schools. It is the best way to apply AI technology to solve real-world problems, address real-world needs, and help our society, medicine and economy.

    👏 Thank you, Professor Yu.

  • "We create a new reality"

    "We create a new reality"

    Professor Xiaojuan Qi works at the Department of Electrical and Electronic Engineering at HKU where she is a member of the Deep Vision Lab. Her work covers deep learning, computer vision and artificial intelligence. In this interview with our science editor, Dr Pavel Toropov, Professor Qi talks about self-driving cars and building virtual worlds.

    ❓ Dr Pavel Toropov: What is the main direction of your work?

    💬 Professor Xiaojuan Qi: Computer vision and artificial intelligence. To put it simply, computer vision means giving machines the capability to see. Humans can see the 3D world – the objects, the relationships between them, and a lot of semantics. Then we make decisions for our many activities in the 3D world.

    In order for a robot, a machine, to go around this world, it must also be able to see. It must recognise different objects and estimate their geometry. This has a lot of applications, one of which is self-driving cars. For a car to be able to drive automatically, it must have the ability to see what is in front of it and what obstacles there are, forecast the behaviour of other agents, and plan so that it drives safely.

    An automated driving system has several parts. One part is about perception – how can a car get knowledge from the environment? Most of this knowledge is visual data from digital cameras and Lidar. Lidar is used to detect 3D objects, and, based on that data, the car can make decisions, like adjusting speed or turning. Our algorithm helps the machine better analyse this data, better understand what is happening around it.

    Another application is medical. We develop AI to automatically analyse medical images, to make diagnoses more informed and precise and to reduce the possibility of mistreatment.

    Another exciting area is AI for science. I am collaborating with the Department of Chemistry; we have developed an AI algorithm to improve the resolution of electron microscope images. This can help biologists make discoveries.

    ❓ Automated driving and AI are not new, what does your research contribute to this field? What are your strengths?

    💬 In order to test if an automated car can drive safely, we need a simulation platform. What we are currently doing is building a simulation environment so that we can help train the models and evaluate whether the car can drive safely in a real environment. Do you know (the massively popular computer game) Wukong?

    ❓ Of course!

    💬 The scenes in this game look very real, and the reason is that the developers used Lidar to scan objects, historical buildings especially, in Shanxi province (of China). They did a reconstruction of them and imported them into the virtual environment – the computer game.

    This is very similar to what I am doing, but without relying on expensive Lidar scanning techniques: we reconstruct the world into virtual space mostly from images shot with a digital camera. We create a completely new reality!

    Another strength is that we are working to make the algorithm run on casually captured data. For example, in Wukong, they needed experts to scan objects and do reconstruction, but what we are doing allows anyone, not only experts, to use their phones to scan. Then we can make algorithms that can reconstruct the scenes.

    ❓ So you reconstruct, or build, a new reality, a virtual world, to train or test automatic cars and robots?

    💬 Yes. We can use Lidar or digital camera scans of a room or a city and turn the real world into a digital space using algorithms. Besides, we also create models that can generate 3D objects, such as tables and chairs. And in this reconstructed or recreated digital world we can train our algorithm and test if it makes mistakes or not.

    ❓ What is the advantage of using the virtual world for training and evaluating algorithms?

    💬 We can get data from interactions – for example a cleaning robot must move a table in the virtual world – and this can then be used to train agents – robots – to interact with the real, physical world. Training in the real world is expensive, and not safe – the robot can break objects, harm humans. But in the virtual world we can produce an infinite amount of data and interactions.

    Besides, we can create what are called corner cases and improve safety. These are cases that happen very rarely in reality but are critical – for example, two cars colliding. We can create these scenarios and let the car learn what to do.

    ❓ Have you partnered with anyone in the industry?

    💬 We work with APAS (the Hong Kong Automotive Platforms and Application Systems R&D Centre, set up by the Hong Kong SAR government) on a collaborative project in automated driving. It is a Hong Kong-based organisation with branches in Mainland China. There is also (the car-hailing app) Didi. And we have collaborations with Google, Tencent and ByteDance.

    ❓ What is the main difficulty machines have when trying to see the world?

    💬 The variety and the diversity of data within the environment. For example, we are in this room, it is now bright, but when it is dark, or when the weather is different, this creates a lot of challenges for the model, for the machines, to recognise the same objects.

    The (car) camera will capture different viewpoints, under different lighting conditions, weather conditions… All these variations make this problem very complicated for machines, even though for humans it is very easy to interpret objects under different conditions.

    So, in order for a machine to recognise an object properly, we must include this object in its training data, and to be robust, the model must have a lot of training data covering all the potential scenarios. If one is not covered, there will be a lot of mistakes at the deployment stage.

    For example, in the US and Europe cars are different sizes. This also creates difficulties when developing 3D detection models. If the model is trained only on data collected in the USA and then you apply it in Europe, it may make mistakes. This is why companies have to develop foundation models, designed to be large in size and take in large amounts of data; the assumption is that the data can cover real-world diversity. ChatGPT is a huge model with hundreds of billions of parameters, trained on data from the entire Internet, but it also makes mistakes.
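
    (Note: a minimal sketch of this domain-shift problem with synthetic stand-in data: a toy classifier fitted on one “US-sized” distribution of car lengths degrades on a shifted “European” distribution. All numbers are invented.)

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def cars(mean_length, n=1000):
        length = rng.normal(mean_length, 0.5, size=n)
        label = (length > mean_length).astype(int)   # toy "is a large vehicle"
        return length.reshape(-1, 1), label

    X_us, y_us = cars(mean_length=5.0)               # training distribution
    X_eu, y_eu = cars(mean_length=4.2)               # shifted deployment distribution

    clf = LogisticRegression().fit(X_us, y_us)
    print("US accuracy:", clf.score(X_us, y_us))     # high
    print("EU accuracy:", clf.score(X_eu, y_eu))     # noticeably lower
    ```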

    ❓ Self-driving cars are already on the road in Mainland China, correct?

    💬 Yes, such cars are already on the road. Baidu has self-driving cars in China already, and I am collaborating with Baidu. In the city of Wuhan, Baidu has a car service called LuoBo KuaiPao. There are no human drivers, but there is a human remote controller who can take over if a challenging scenario happens. One human controller can handle over 20 cars.

    ❓ When do you think self-driving cars will be as common as “normal” cars?

    💬 It is coming. I think it will come in the next few years. The major issue is that humans cannot tolerate any mistakes from AI models. It is big news if a self-driving car makes a mistake, but humans make mistakes too – a lot of them! We need to accept that machines can also make mistakes. The issue is: how do we make humans trust machines? We need human-machine collaboration.

    👏 Thank you, Professor Qi.

  • "What takes us one day, AI can do in a few minutes"

    "What takes us one day, AI can do in a few minutes"

    Professor Moriaki Yasuhara works at the Swire Institute of Marine Science and the School of Biological Sciences. One of his main research interests is paleoecology – the interaction of ancient organisms with their environment. In this interview with our science editor Dr Pavel Toropov, Professor Yasuhara and his PhD student Jiamian Hu explain how AI deep learning tools have transformed their research.

    ❓ Dr Pavel Toropov: Could you explain the direction of the research done by your laboratory?

    💬 Professor Moriaki Yasuhara: We want to understand climatic and environmental impacts on our planet, especially on marine ecosystems and biodiversity. We are interested in how climate change, global warming, acidification, and oxygen decline affect marine animals.

    Our laboratory focuses on paleobiology. We study marine biology, but over longer time scales, using the fossil record. In contemporary biology, scientists start monitoring after they realise there is a problem. Once they realise there is, for example, pollution, they start monitoring. But we don’t know what the natural environmental conditions were before pollution.

    But, by studying sediment cores and deep time fossil record, we have long time series throughout – before and after. We can go back hundreds of thousands, tens of millions or even hundreds of millions of years.

    ❓ Dr Pavel Toropov: What animals do you use in the fossil record?

    💬 Professor Moriaki Yasuhara: Most animals – fish, jellyfish, worms, marine mammals – don’t have a good fossil record, as they have no hard parts, such as shells, that allow for good fossil preservation. Or they are too large to be abundantly preserved as fossils in a small amount of sediment. So, we need a representative, a surrogate, to make conclusions about the global marine ecosystem.

    One representative is ostracods. They are tiny crustaceans with really nice calcium carbonate shells, and they have some of the best fossil records amongst all crustaceans, arthropods, and metazoans.

    So, by studying ostracods, we can know not only about ostracods themselves, but, using them as a representative, learn about the entire ecosystem, the entire biodiversity.

    ❓ Dr Pavel Toropov: Where do your ostracods come from?

    💬 Professor Moriaki Yasuhara: Mainly from the Cenozoic Era - from 66 million years ago to the present. Some of my students are working on Ordovician samples – from more than 400 million years ago. My research locations include the Arctic, Antarctic, Atlantic Ocean, Indian Ocean, Pacific Ocean, Red Sea, Mediterranean Sea… Hong Kong, Africa.

    ❓ Dr Pavel Toropov: So, to explain your work in simple terms: you get a core sample of the sediment from the bottom of the sea, take out all the tiny ostracods, put them on the microscope slide. Then you identify what species they are. Because different species prefer different conditions, by knowing how the numbers of different species of ostracods changed with time, you can make conclusions on the changes in the entire marine ecosystem, correct?

    💬 Professor Moriaki Yasuhara: Yes.

    ❓ Dr Pavel Toropov: How do you use AI in this?

    💬 Professor Moriaki Yasuhara: There are several problems (working with ostracods). First, it is very time-consuming – picking, identification, taxonomy. Also, we need expert knowledge. To train one person to be good at ostracod identification and taxonomy takes many years. An entire PhD is probably necessary.

    Recently, I have been working with my PhD student Hugo Jiamian Hu to automate this process by applying AI deep learning. He did a very good job, and now we can scan entire slides automatically, using our digital microscope.

    Hugo used more than 200,000 ostracod specimens for training our AI, and now the AI can do its own automatic identification. Identification is now much faster, and we can work with much bigger datasets.

    💬 Jiamian Hu: Yes, and having a lot of data, big data, means quite something! The 200,000 research-grade, specialist-identified samples ensure that our deep neural network can effectively learn patterns in ostracod identification.
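
    (Note: as an illustration, here is a minimal sketch of how such a deep-learning classifier could be set up in PyTorch. The folder layout, model choice and training settings are our assumptions for illustration, not the team’s actual code.)

    ```python
    # Illustrative sketch only -- not the HKU team's actual pipeline.
    # Assumes labelled ostracod images sorted into one folder per species;
    # fine-tunes a standard pretrained CNN to identify them.
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    # hypothetical folder layout: ostracods/train/<species_name>/<image>.png
    train_set = datasets.ImageFolder("ostracods/train", transform=transform)
    loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

    model = models.resnet18(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(10):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    ```

    In a full pipeline such a classifier would sit downstream of the slide scanning and specimen detection steps the team describes; classification is only one part.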

    ❓ Dr Pavel Toropov: How much time does using AI save you?

    💬 Jiamian Hu: We have a PhD student who has about a hundred samples of ostracods from Panama. Before AI, by hand, one by one, it may take her several days to finish a sample. Now, using AI – less than an hour.

    (Shows a microscope slide with ostracods) One ostracod is one white dot on the slide. In one second AI can identify 20 of them. There are several hundred ostracods here on this slide. Identification will take a few minutes for AI, but by eye, depending on the person – one day, or several hours.

    In addition, Professor Yasuhara is not always free, but AI is always free. So, when a student has a question about identification, AI can always help.

    ❓ Dr Pavel Toropov: Did you write this deep learning program yourself?

    💬 Jiamian Hu: I wrote it with PyTorch. I built it from scratch; it is specifically designed for this task. I was a computer science student before.

    💬 Professor Yasuhara: AI is not only more time-efficient – by using AI and deep learning we have made exciting discoveries and learned new things. AI can discover errors and misidentifications. AI can give you new questions to answer.

    👏 Dr Pavel Toropov: Thank you both.

  • "We used to wait for a year for results, with AI - a week"

    "We used to wait for a year for results, with AI - a week"

    Professor Ziyang Meng is an acclaimed computational condensed matter physicist and is one of the pioneers of the use of AI in computational physics. His research focuses on developing large-scale numerical and machine-learning simulations to investigate quantum materials. Professor Meng has published more than 100 papers in top journals such as Nature, Nature Physics and PNAS. In this interview with our science editor, Dr Pavel Toropov, Professor Meng explains how AI has revolutionised quantum and computational physics research. He also talks about ancient Greece and mahjong.

    ❓ Dr Pavel Toropov: How do you use AI in your work?

    💬 Professor Ziyang Meng: Quantum materials are very complicated. The existing methods usually face an exponential wall.

    (Note: in a quantum system, each particle can exist in multiple states. As the number of particles increases, the number of possible states increases exponentially, and thus so does the amount of information needed to describe the system and the computational capacity to do so. This increase is known as the exponential wall.)
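
    (Note: the scale of the problem is easy to demonstrate. The toy calculation below is our illustration, not the professor’s; it counts the memory a brute-force description of a quantum spin system would need.)

    ```python
    # Toy illustration of the exponential wall: n spin-1/2 particles need
    # 2**n complex amplitudes to be described exactly.
    for n in (10, 30, 50, 100):
        amplitudes = 2 ** n
        gigabytes = amplitudes * 16 / 1e9    # one complex number = 16 bytes
        print(f"{n} spins: {amplitudes:.2e} amplitudes, {gigabytes:.2e} GB")
    # already at ~50 spins the memory exceeds any existing supercomputer
    ```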

    The wall is very high! Our computational capacity cannot jump over it. So, we use AI to extract quantum information from a quantum material. This information is used to design better algorithms that help us jump over the exponential wall to look into new materials and new properties.

    For example, we developed what is called the Self-Learning Monte Carlo Method. It is one of the first examples of employing explainable-AI techniques in quantum many-body systems. It helps to open up the field of AI-inspired algorithms for reducing numerical complexity in the computational research of quantum materials.

    My inspiration for developing these AI-related algorithms comes from the (ancient) Greek Delphic maxim “know thyself”.

    ❓ This is a lot of terminology! Before we get to the ancient Greece part, could you explain why quantum materials are complicated?

    💬 The basic ingredients of quantum materials – electrons, billions and billions of them – are subject to mutual quantum mechanical interactions and the complicated chemical, physical and topological environment they live in.

    The full quantum treatment of so many electrons is way beyond paper and pencil. Instead, it requires modern computational techniques and advanced theoretical analyses.

    Brute-force computation cannot solve these difficult problems – because of the exponential wall, one has to approach them with deeper understanding: either applying artificial intelligence or human intelligence. Now with AI, many of the previously impossible simulations – because they required vast amounts of computational power – are becoming possible.

    ❓ Can you give an example?

    💬 The Self-Learning Monte Carlo algorithm. With this algorithm, we first use AI to extract better model parameters from a smaller-scale simulation – with only a few electrons – and these parameters can more accurately represent how the billions of electrons interact with each other inside the material, and how they respond to experimental conditions such as temperature and electric or magnetic fields.

    (Note: Monte Carlo simulation allows us to model and solve problems that involve randomness and big data. It is used to handle such situations by testing many possible scenarios.)

    Then we can start the large-scale Quantum Monte Carlo simulation on supercomputers. It is faster than the traditional simulation without the AI-based self-learning step.

    The self-learning step is crucial. It gives us better and more accurate model parameters, which means that we get to know the properties of the material better. This is what I meant by the Delphic maxim “know thyself”.

    To “know thyself” here means that we must find the most important interactions among these interacting electrons. Self-learning, therefore, is as modern as AI and quantum physics, but also as old as the beginning of human civilisation – ancient Greece.
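
    (Note: in outline, the self-learning step can be sketched as follows. This is a toy classical illustration of the idea – fit a cheap effective model to data from an expensive one – with every model and number invented for the example; the real method works on quantum many-body systems.)

    ```python
    # Toy sketch of the self-learning idea, not the actual SLMC code:
    # 1) sample a small "expensive" model, 2) fit a cheaper effective model,
    # 3) let the effective model drive proposals in large-scale Monte Carlo.
    import numpy as np

    rng = np.random.default_rng(0)
    L, beta = 16, 0.7                   # toy 1D Ising chain, inverse temperature

    def true_energy(s):
        # "expensive" model: nearest- and next-nearest-neighbour couplings
        return -1.0 * np.sum(s * np.roll(s, 1)) - 0.3 * np.sum(s * np.roll(s, 2))

    # step 1: a short exact Metropolis run on the small system
    samples, energies = [], []
    s = rng.choice([-1, 1], size=L)
    for _ in range(5000):
        i = rng.integers(L)
        s2 = s.copy(); s2[i] *= -1
        if rng.random() < np.exp(-beta * (true_energy(s2) - true_energy(s))):
            s = s2
        samples.append(np.sum(s * np.roll(s, 1)))   # feature the cheap model uses
        energies.append(true_energy(s))

    # step 2: "know thyself" -- fit one effective coupling J so that
    # E_eff = -J * sum_i s_i s_{i+1} best reproduces the sampled energies
    C, E = np.array(samples, dtype=float), np.array(energies)
    J_eff = -np.dot(C, E) / np.dot(C, C)            # least-squares fit of E ~ -J*C
    print("learned effective coupling:", J_eff)

    # step 3 (not shown): propose global updates with the cheap E_eff and accept
    # with probability min(1, exp(-beta * (dE_true - dE_eff))), which keeps the
    # simulation exact for the original model while running much faster.
    ```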

    ❓ Do you mean the Delphic maxims – the set of moral principles that were inscribed on the Temple of Apollo in ancient Greece?

    💬 Yes, and I even wrote a popular science article about this – “From Delphic Oracle to Self-Learning Monte Carlo”. The article is in Chinese and was published in Physics – a journal of the Chinese Physics Society (http://www.wuli.ac.cn/cn/article/doi/10.7693/wl20170406).

    ❓ So, AI speeds things up by saving quantum physicists a lot of number crunching?

    💬 AI in quantum physics research does not only mean that we can compute faster. AI helps us find better, more accurate, models for the quantum materials. This allows us to better understand the material and also better understand the process of understanding: that’s how we can come across new laws of physics.

    ❓ Quantum physics is not something that most people experience in their daily lives. To non-specialists, such research may sound very abstract, entirely theoretical. What does your research mean in the “real world”? What are the practical applications of your work?

    💬 Our Momentum-Space Monte Carlo self-learning method deals with a new mystery in a quantum material: magic-angle twisted bilayer graphene, in which superconductivity was recently discovered. Graphene is what we have in every pencil! If we can elevate the superconducting temperature from minus 270 Celsius to, say, room temperature, we can solve the global energy crisis.

    Our recent paper on this was awarded the 2024 Top China Cited Paper Award for Physics.

    ❓ How can this solve the energy crisis?

    💬 Using superconducting cables and wires, the electricity, once generated at the power station, will not be lost as heat and dissipated into the air. This is because electrons in the superconducting state do not experience resistivity, as they do in commonly used conductors such as copper, iron and other metals. No resistivity means electron movement is not slowed down, converted into heat and lost, so 100% of the generated energy can be used for the intended purpose.
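
    (Note: the scale of the saving can be illustrated with a back-of-the-envelope calculation using resistive loss P = I²R; all the numbers below are our assumptions, not measured grid data.)

    ```python
    # Back-of-the-envelope transmission-loss comparison; every number here
    # is an illustrative assumption, not real grid data.
    P_sent = 1e9                  # 1 GW sent down the line
    V = 400e3                     # 400 kV transmission voltage
    R_copper = 10.0               # assumed total line resistance in ohms

    I = P_sent / V                        # line current (~2,500 A)
    loss_copper = I ** 2 * R_copper       # ohmic loss: P = I^2 * R
    loss_super = I ** 2 * 0.0             # zero resistance -> zero ohmic loss

    print(f"copper line: {loss_copper / 1e6:.1f} MW lost "
          f"({100 * loss_copper / P_sent:.1f}% of the power sent)")
    print(f"superconductor: {loss_super:.0f} W lost")
    ```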

    ❓ In your office you have a physics-themed mahjong set. Why?

    💬 Mahjong is a strategy game. In quantum physics, AI helps the physicist with strategy, allowing the physicist to better understand the problem. I find AI to be a very good partner that helps physicists solve the mysteries of Mother Nature.

    Mother Nature plays games with us, hiding her secrets behind complicated phenomena, and we need a good partner – like AI, to play this game, and find solutions.

    I am teaching a new undergraduate course at the HKU Physics Department - PHYS3151: Machine Learning in Physics, where you can learn how to use AI techniques to solve problems - from Newtonian mechanics and electromagnetism to quantum phenomena. Everyone is welcome to join!

    https://quantummc.xyz/hku-phys3151-machine-learning-in-physics-2024/
    👏 Thank you, Professor Meng!

  • "AI makes the impossible possible for us"

    "AI makes the impossible possible for us"

    Professor Haibo Jiang works at the Department of Chemistry at the University of Hong Kong and is the director of the multi-disciplinary JC STEM Lab of Molecular Imaging. He received his PhD from the University of Oxford and joined HKU in 2021. In this interview with our science editor Dr Pavel Toropov, Professor Jiang explains the importance of AI in his research.

    ❓ Dr Pavel Toropov: What is the research focus of your lab?

    💬 Professor Haibo Jiang: My lab focuses on the development of new imaging technologies to see inside biological systems at a very small scale. For example, what is happening inside a single cell or a single organelle.

    💬 Our molecular imaging combines different imaging modalities – optical microscopy, electron microscopy and mass spectrometry imaging. By combining them, we can extract information from one sample so that we can understand what is happening in its biology and structure.

    ❓ What are the images used for?

    💬 One example is tracking drugs in biological systems, with high resolution and very high sensitivity. We can see, when people take a drug (and we also use animal models), where the drug goes and how it gets to its target to be therapeutically effective.

    We combine structural information from electron microscopy with chemical information from mass spectrometry imaging, and then we can reliably correlate where the drug is – in which organelle, in which cell, in which tissue of which organ. We can also learn why the drug is, or isn’t, effective, and why it causes side-effects.

    ❓ Which drugs are you working with?

    💬 Our system is versatile. We have applied it to understand the traffic of antibiotics. Once in the human body, antibiotics need to get to the bacteria to kill them. We can track a range of different antibiotics to see if they get to the right cells at the infected site. We also applied our methods to study cancer drugs to see where the drug gets into the cell, because this is important for its efficacy.

    ❓ And what does the AI do?

    💬 For us, AI makes the impossible possible! With AI we can achieve high image quality at high speed.

    ❓ Could you explain?

    💬 The biggest problem in imaging is the compromise between image quality and image speed. AI speeds things up and also provides better resolution. Currently, the hardware we have has its limit in spatial resolution, so the improvement has to come from the software – which means AI.

    One of the major limitations of our method is that it is slow. It is the nature of the microscopy techniques that we use. We scan pixel by pixel, and there is a compromise between the quality of the image and the speed of the imaging. If we scan fast, there will be noise and the signal will be low. If we scan each pixel ten times, we get a higher signal and less noise, and the quality is much higher – but it takes longer.

    But using AI, we can improve electron microscopy speed by more than 10 times. We can scan at lower quality but at a faster rate, use AI to recover the image quality, and so cover a bigger region of the sample and extract more information from one biological sample.
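
    (Note: one common way to realise this in software – our hedged sketch, not necessarily the lab’s exact method – is to train a small convolutional network on pairs of fast, noisy scans and slow, high-quality scans of the same regions, so that fast scans can be restored afterwards.)

    ```python
    # Assumed approach, not the lab's actual code: a small CNN learns to map
    # fast, noisy scans onto slow, high-quality scans of the same regions.
    import torch
    import torch.nn as nn

    class DenoiseCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, kernel_size=3, padding=1),
            )

        def forward(self, x):
            return self.net(x)            # predicted clean image

    model = DenoiseCNN()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # stand-in tensors; in reality these would be registered pairs of fast
    # (noisy) and slow (clean) scans of the same sample regions
    fast_scans = torch.randn(8, 1, 128, 128)
    slow_scans = torch.randn(8, 1, 128, 128)

    for step in range(100):
        optimizer.zero_grad()
        loss = loss_fn(model(fast_scans), slow_scans)
        loss.backward()
        optimizer.step()
    ```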

    ❓ What is the next step for you?

    💬 Life is in 3D, but what we just talked about is 2D. When we get to 3D imaging it is even more challenging!

    There is a technique that allows you to look at 3D structures of cell organelles, but at most, you can do around 200 by 200 by 200 microns, and that’s really small for a tissue sample.

    But scientists dream of seeing a big sample in 3D – for example, how the neurones are connected in the human brain. This is not possible with current technology. Our dream is to develop the algorithms to achieve high-speed 3D imaging of large biological samples. We are not there yet, but that is our aim.

    ❓ What is the future of AI in your field?

    💬 I think, in biological imaging, AI will be everywhere – from the imaging itself to the data analysis. I only started collaborating with AI people after I came to HKU, but I think AI will be the future.

    But AI will not replace what we, humans, do. We need to learn how to employ AI in our research – to do what we do, but better.

  • "AI helps us decode bacteria’s chemical language and harness their weapons"

    "AI helps us decode bacteria’s chemical language and harness their weapons"

    Professor Philip Yongxin Li, from the Department of Chemistry, specialises in chemical biology and drug discovery, with a focus on bioinformatics-guided drug discovery and biosynthesis. In this interview with our science editor, Dr Pavel Toropov, Professor Li talks about how his team uses AI to create new antibiotics.

    ❓ Dr Pavel Toropov: Could you explain your research?

    💬 Professor Yongxin Li: We work on discovering new antibiotics to tackle the problem of antibiotic resistance – superbugs. Because of ongoing overuse, the current antibiotics are failing, and superbugs – antibiotic-resistant bacteria – are emerging.

    My job is to learn from Mother Nature. In the natural environment, bacteria use chemicals as weapons in competition with other bacteria. These are very intense chemical interactions!

    Our job is to decode this chemical language, make good use of bacteria’s chemical weapons, repurpose them for therapy, and develop them into antibiotics and anti-virals to kill human pathogens.

    But rather than following the traditional way – culturing bacteria, isolating them and identifying the chemical compounds that they make, which is time-consuming and labour-intensive – we look at the genetic potential of bacteria, mining the chemicals from bacterial genomes in large datasets.

    Instead of using synthetic chemistry to make new antibiotics, we use synthetic biology to harness their genetic potential for drug discovery. We use cell factories, cell assembly lines, to produce chemicals for us. We clone biosynthetic genes [Note: biosynthetic genes are genes that produce complex chemicals, such as those used to kill other bacteria], plug them into a cell factory, and let the cell factory build the antibiotic for us.

    ❓ How is AI used in your work?

    💬 We use AI to decode bacteria’s chemical language. Genome information is now available online – more than one million bacterial genomes!

    We developed a new methodology called genome mining. AI looks at a million genomes and assesses their genetic potential for coding an antibiotic or an anti-viral molecule.

    The traditional methods analyse the genomes one by one. It is not efficient, and the chance of discovering a new antibiotic is low. So, we train AI to select, from one million genomes that can contain 20 or 30 million biosynthetic genes, the genes that code for antibiotics.

    We use AI to select and prioritise the genes with the highest probability that they code for new antibiotics. Using AI we can also predict the antibiotics’ structure and bioactive potential.
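
    (Note: in spirit, the prioritisation step resembles the sketch below. The k-mer features, model and sequences are illustrative assumptions; real genome-mining pipelines work on whole biosynthetic gene clusters with far richer representations.)

    ```python
    # Illustrative sketch of AI prioritisation, not the lab's pipeline:
    # score candidate gene sequences by predicted probability of encoding
    # an antibiotic, then rank them for experimental follow-up.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    def kmers(seq, k=4):
        # represent a DNA sequence as its overlapping k-mer "words"
        return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

    # toy training data: 1 = known to encode an antibiotic, 0 = not
    train_seqs = ["ATGGCGTACGTTAGC", "ATGCCCGGGAAATTT", "ATGTTTAAACCCGGG"]
    labels = [1, 0, 1]

    vec = CountVectorizer()
    X = vec.fit_transform(kmers(s) for s in train_seqs)
    clf = LogisticRegression().fit(X, labels)

    # score and rank unlabelled candidates (millions in reality, three here)
    candidates = ["ATGGCGTACGAAAGC", "ATGCCCGGGTTTAAA", "ATGGGGCCCAAATTT"]
    scores = clf.predict_proba(vec.transform(kmers(s) for s in candidates))[:, 1]
    for seq, p in sorted(zip(candidates, scores), key=lambda pair: -pair[1]):
        print(f"{seq}  P(antibiotic) = {p:.2f}")
    ```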

    ❓ How much time does AI save?

    💬 Using the traditional way, the in-silico screening process [Note: “in silico” means biological experiments conducted on a computer, or via a computer simulation, to make predictions about the behavior of different compounds] can screen several thousand biosynthetic genes and narrow them to about 100 for experimental validation. Using AI, we can start with 20 or 30 million genes and evaluate their potential for coding for antibiotics.

    The traditional way can take years. But using AI, we can finish the in-silico assay within a few days or even hours. To validate the results, you still need to clone the genes using synthetic biology, and this part still takes a long time.

    ❓ How close are you to creating a new antibiotic?

    💬 Drug discovery and development are a very long process. But with one of our lead compounds, we have finished pre-clinical tests and in-vivo anti-infection studies, and evaluated its ecotoxicity. It is ready for the next step.

    👏 Thank you, Professor Li.

  • HKU Racing at Formula Student 2023

    Project Team: Dr Match Wai Lun Ko

    🏎️🏁 @hkuracingteam, the University of Hong Kong’s motorsports team, are now in the UK with the racing car they designed and built. On the legendary Silverstone circuit, the HKU team are taking part in the @formulastudent motor racing championships, an international event where university motor racing teams compete not only on the racing track, but also in car design and business planning. In 2019, the HKU team came away with two awards, including first place in design. We are wishing our racers even better results this year!

  • Bryde’s Whale

    🐋Recent sightings of Bryde’s whale feeding close to the shores of Hong Kong surprised Hong Kongers, who think that the whale may have got lost. But the researchers from @The Swire Institute of Marine Science (SWIMS) say that whales belong in the seas of Hong Kong. The latest research by SWIMS has revealed that Hong Kong’s marine biodiversity may be amongst the world’s highest – scientists have already discovered more than 6,000 marine species in Hong Kong, and the number keeps growing.

  • A Day in the Office

    Project Team: Dr Shelby McIlroy / Emily Chei / Róisín Hayden

    👀THE DAILY GRIND! What is a typical work day of an HKU marine biologist? To find out, we followed young researchers from @hkuswims (SWIMS). The team went diving to retrieve sampling structures which had been placed on the seabed to be colonised by marine organisms. This fieldwork is part of MarineGEO – a global project to monitor marine biodiversity. Here, Hong Kong punches above its weight – in a very small area we have more than 6,000 marine species. But these fish, algae, corals and molluscs live alongside millions of people, and understanding how humans affect our marine biodiversity is crucial if we want to ensure its survival.

  • eDNA

    Project Team: Dr Shelby McIlroy / Dr Mathew Seymour / Coşkun Güçlü / Elaine Chan Tsz Ying

    IT’S A KIND OF MAGIC! Environmental DNA (eDNA) consists of the tiny fragments of DNA that animals shed into the environment, such as water or soil. HKU scientists can take a small sample of soil or water and, using these microscopic eDNA clues, identify which animals have been present – without having to see or disturb them. Research assistant professor Shelby McIlroy and assistant professor Mathew Seymour explain how eDNA is used by HKU scientists.

  • The Lab Twins

    The Lab Twins

    Project Team: Emily Chei / Róisín Hayden

    UNIQUE TALENTS, SHARED COMMITMENT, FUTURE LEADERS: PhD candidates Roisin Hayden from Ireland and Emily Chei from the USA study corals. They started their PhDs at the same time and have been inseparable ever since. They call themselves "the lab twins", but outside the lab their hobbies are very different - Roisin plays rugby and Emily is a dancer.

    On life as a marine biologist (Roisin): “I look at how algae that live inside coral compete for access to carbon and nitrogen. I first saw coral reefs in 2018 in Thailand and realised - this is what I want to do! No day is ever the same: I can be diving, working in the lab, or writing. But corals are dying faster than we can learn about them and save them.”

    (Emily): “I study how different coral species acquire nutrients and how that affects their resilience to environmental stressors. I love the experiences – I lived on research boats, explored submerged oil rigs, snorkelled with dolphins. I learned not only laboratory skills, but transferrable ones, like communications.”

    On women in science (Roisin): “There are many women in science early in their careers, but in positions of power there are few women, and this bothers me! There is a long way to go before we have equality in science.”

    (Emily): “Women have difficulty making it to higher positions. In science we need women mentors to lead other women.”

  • Clever Fish

    Project Team: Dr Celia Schunter / Debora Desantis / Daniele Romeo

    CLEVER FISH AND GLOBAL WARMING. Temperature records for the oceans keep being broken, and global warming is changing and damaging marine ecosystems. Marine animals are affected by rising temperatures in ways that we are just starting to understand. By tracking the changes in the brain and genes, Dr Celia Schunter’s team at the School of Biological Sciences are learning how the higher temperatures are changing the behaviour of a fish called cleaner wrasse. Cleaner wrasses are highly intelligent fish that can recognise themselves in the mirror and are very important for the health of coral reefs because they eat parasites and dead tissues from other fish. With higher temperatures, however, wrasses spend less time cleaning other fish, which then affects the entire coral ecosystem.

  • Digital Twins

    Project Team: Dr H.H. Cheung

    NON-IDENTICAL TWINS! Cyber-physical systems link reality with its digital representation, allowing objects in the real world to be manipulated remotely on the computer. This technology has the potential to transform a large range of professions and industries, such as medicine, agriculture, logistics, construction and engineering. The team led by Dr H.H. Cheung at HKU’s Department of Industrial and Manufacturing Systems Engineering have developed such a cyber-physical system, called Digital Twin. Meet the prototype.

  • Smart Elderly Walker

    Project Team: Dr Wen Rongwei / Zhao Chongyu

    SCIENCE SERVING THE SOCIETY. Hong Kong has the longest life expectancy in the world. Over a quarter of all Hong Kongers are aged between 60 and 80, and more than 300,000 are over 80. Being able to move independently, preserving their dignity, is vital for many.

    The team led by Dr Chuan Wu at the Department of Computer Science built a Smart Elderly Walker to help. The walker is designed for Hong Kong’s realities: many elderly people live in very small apartments, so the walker was made foldable; and because Hong Kongers usually shop at outdoor markets, it was designed to operate outdoors.

    Technologically, the Smart Walker is pioneering. It can be operated hands-free, without pushing, because the walker monitors the user’s movement using LiDAR and always stays in front of the user – a unique feature called front-following. In an emergency, the user can grab the walker to prevent a fall, and special sensors immediately engage the brakes. Moreover, the walker can recognize the user’s voice and be called over, a feature made possible by a voice recognition AI tool.
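
    (Note: front-following can be thought of as a simple control loop – estimate the user’s position and heading from LiDAR, aim for a point a fixed distance in front of them, and steer toward it. The sketch below is an illustrative guess at that logic, not the team’s actual controller.)

    ```python
    # Illustrative sketch of the front-following idea, not the walker's
    # real controller: aim for a point a fixed distance ahead of the user.
    import math

    FOLLOW_DIST = 0.8    # metres the walker keeps in front of the user
    GAIN = 1.5           # proportional gain for the velocity command

    def front_target(user_x, user_y, user_heading):
        # the point FOLLOW_DIST metres ahead of the user, from LiDAR tracking
        return (user_x + FOLLOW_DIST * math.cos(user_heading),
                user_y + FOLLOW_DIST * math.sin(user_heading))

    def velocity_command(walker_x, walker_y, user_x, user_y, user_heading):
        tx, ty = front_target(user_x, user_y, user_heading)
        # simple proportional controller steering the walker to the target
        return GAIN * (tx - walker_x), GAIN * (ty - walker_y)

    # one control step: user at the origin facing +x, walker slightly off target
    vx, vy = velocity_command(0.6, 0.1, 0.0, 0.0, 0.0)
    print(f"command: vx={vx:.2f} m/s, vy={vy:.2f} m/s")
    ```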

    This year, HKU’s Smart Elderly Walker won a bronze medal at the International Exhibition of Inventions in Geneva, presented by its designers - post-doctoral researcher Dr Wen Rongwei and PhD student Zhao Chongyu.

  • Dancing Robots

    Project Team: Department of Industrial and Manufacturing Systems Engineering

    DANCING LIKE A ROBOT? Robotics is one of the most important technologies in the modern world. Undergraduate students at the Department of Industrial and Manufacturing Systems Engineering build robots with functions as varied as sorting library books, killing viruses in the air, patrolling streets and rescuing people in the mountains. But it turns out that these robots also have rhythm!