Women in AI: Sarah Bitamazire helps companies implement responsible AI

To give academics and other women focused on AI their well-deserved – and long overdue – time in the spotlight, TechCrunch is launching an interview series focusing on notable women who have contributed to the AI revolution.

Sarah Bitamazire is Chief Policy Officer at boutique consultancy Lumiera, where she also helps write the Lumiera Loop newsletter, which focuses on AI literacy and the responsible use of AI.

Previously, she worked as a political advisor in Sweden, focusing on gender equality, foreign policy, and security and defense policy.

How did you get into AI? What attracted you to this field?

AI found me! AI is having an ever-increasing impact in the areas I am closely involved in. In order to provide sound advice to senior decision makers, it was essential for me to understand the value of AI and its challenges.

First, in the field of defense and security, where AI is used in research and development and in active warfare. Second, in the field of arts and culture, creators were among the first groups to recognize both the added value of AI and its challenges. They helped bring to light copyright issues, such as the ongoing case in which several daily newspapers are suing OpenAI.

You know something is having a huge impact when leaders from very different backgrounds and problem areas are increasingly asking their advisors, “Can you tell me about this? Everyone is talking about it.”

What work in AI are you most proud of?

We recently worked with a client who had attempted to integrate AI into their R&D workflows but was unable to do so. Lumiera developed an AI integration strategy with a roadmap tailored to the client’s specific needs and challenges. The combination of a curated AI project portfolio, a structured change management process, and leadership that recognized the value of multidisciplinary thinking made this project a great success.

How do you overcome the challenges of the male-dominated technology industry and, more broadly, the male-dominated AI industry?

By clarifying the why. I am actively involved in the AI industry because there is a deeper purpose and a problem that needs to be solved. Lumiera’s mission is to provide comprehensive guidance to leaders so they can make responsible decisions with confidence in a technological age. This sense of purpose remains the same no matter what field we are in. Male-dominated or not, the AI industry is vast and increasingly complex. No one can see the bigger picture alone, and we need more perspectives so we can learn from each other. The challenges that exist are huge, and we all need to work together.

What advice would you give to women who want to enter the AI field?

Getting involved with AI is like learning a new language or new skills. AI has enormous potential to solve challenges across different sectors. What problem do you want to solve? Find out how AI can be a solution and then focus on solving that problem. Keep learning and connecting with people who inspire you.

What are the most pressing problems facing AI as it advances?

The rapid pace at which AI is evolving is a problem in itself. I believe it is important to ask this question regularly in order to move with integrity in the AI space. At Lumiera, we do this every week in our newsletter.

Here are some of the things that concern us most at the moment:

  • AI hardware and geopolitics: Public sector investment in AI hardware (GPUs) is likely to increase as governments worldwide deepen their AI knowledge and make strategic and geopolitical moves. So far, there is activity from countries such as the UK, Japan, the United Arab Emirates and Saudi Arabia. This is an area to keep an eye on.
  • AI benchmarks: As we rely more and more on AI, it’s important to understand how we measure and compare its performance. Choosing the right model for a specific use case requires careful consideration. The best model for your needs may not necessarily be the one at the top of a leaderboard. Because models change so quickly, the accuracy of benchmarks also fluctuates.
  • Balance between automation and human supervision: Believe it or not, over-automation is a problem. Decisions require human judgment, intuition and understanding of context. This cannot be reproduced by automation.
  • Data quality and governance: Where is the good data?! Data flows in, through, and out of organizations every second. If that data is poorly managed, your organization will not benefit from AI. And in the long run, it could be detrimental. Your data strategy is your AI strategy. Data system architecture, management, and ownership must be part of the discussion.

What problems should AI users be aware of?

  • Algorithms and data are not perfect: As a user, it is important to be critical and not blindly trust the results, especially when using off-the-shelf technology. The underlying technology and tools are new and evolving, so keep this in mind and use common sense.
  • Power consumption: The computational effort required to train large AI models, combined with the energy required to operate and cool the necessary hardware infrastructure, results in high power consumption. Gartner predicts that AI could consume up to 3.5% of global electricity by 2030.
  • Inform yourself and use different sources: AI literacy is key! To make meaningful use of AI in your life and work, you need to be able to make informed decisions about how to use it. AI should support your decision-making, not make the decision for you.
  • Perspective density: You need to involve people who know their problem space really well to understand what kinds of solutions can be created with AI, throughout the entire AI development lifecycle.
  • The same applies to ethics: It is not something that can simply be tacked on to an AI product once it is built – ethical considerations must be incorporated early and throughout the development process, starting in the research phase. This is done by conducting social and ethical impact assessments, mitigating biases, and promoting accountability and transparency.

When building AI, it is important to recognize an organization’s limited capabilities. Gaps are opportunities for growth: they allow you to prioritize areas where you need outside expertise and develop robust accountability mechanisms. Factors such as current skills, team capacity, and available financial resources should all be assessed. These and other factors will influence your AI roadmap.

How can investors better promote more responsible AI?

As an investor, you first want to make sure that your investment is sound and will last for the long term. Investing in responsible AI not only secures financial returns but also mitigates risks related to trust, regulation and data protection, for example.

Investors can advocate for responsible AI by looking at indicators of responsible AI governance and use. A clear AI strategy, dedicated resources for responsible AI, published responsible AI policies, strong governance practices, and the integration of human feedback are factors to consider. These indicators should be part of a robust due diligence process. More science, less subjective decision-making. Moving away from unethical AI practices is another way to encourage responsible AI solutions.
