I am currently travelling and researching in Norway, and its beauty is endless and breathtaking. Norway is a rather narrow country in northern Europe that shares the Scandinavian Peninsula with Sweden and Finland. Famous for its fjords, the sea inlets between steep cliffs, it has many delightful tidbits that few likely know of: the Nobel Peace Prize is awarded in Oslo every year, it has the world’s longest road tunnel, and Norway introduced salmon sushi to Japan. Since arriving on a Regent cruise ship, I have been sampling many traditional foods, especially Norwegian meatballs, not to be confused with Swedish meatballs.
Norway is one of the wealthiest countries in the world, with a very high standard of living compared with other European countries and a strongly integrated welfare system. Norway’s modern manufacturing and welfare system relies on a financial reserve built from natural resources, particularly North Sea oil. In terms of GDP per capita, Norway ranks eighth in the world. With a little over 5 million residents, the country has developed a very modern infrastructure, especially for the internet.
At an average speed of 52.6 Mbps, Norway has the fastest mobile internet connection in the world, followed by the Netherlands, Hungary, Singapore and Malta. With this high-speed infrastructure, the country is well positioned to have a leading voice on AI.
The Government’s goal is that investment in artificial intelligence research, research-based innovation and development should be concentrated in strong research communities where cooperation between academia and industry is central, such as the centres of excellence and the centres for research-based innovation.
The country has been consistently forward-thinking: in 2018 a consortium called NORA was set up to strengthen Norwegian research and education in artificial intelligence, machine learning, robotics and related disciplines. The consortium comprises Norwegian universities and research institutions engaged in research and education in artificial intelligence: the University of Agder, the Arctic University of Norway, OsloMet, the University of Bergen, the Norwegian University of Life Sciences, Simula Research Laboratory AS, the University of Stavanger, NORCE and the University of Oslo.
Norway is known for the high level of trust it places in both its public and private institutions. The Norwegian government believes that:
- artificial intelligence that is developed and used in Norway should be built on ethical principles and respect human rights and democracy;
- research, development and use of artificial intelligence in Norway should promote responsible and trustworthy AI;
- development and use of AI in Norway should safeguard the integrity and privacy of the individual;
- cyber security should be built into the development, operation and administration of AI solutions; and
- supervisory authorities should oversee that AI systems in their areas of supervision are operated in accordance with the principles for responsible and trustworthy use of AI.
The Norwegian Data Protection Authority has published a guide on artificial intelligence and privacy that covers these and other issues. As early as 2018 it emphasized the importance of metadata describing the content of the different data fields, and urged organizations to put their own house in order: gain an overview of what data they manage, what the data means, what it is used for, what processes it is used in, and whether legal authority exists for sharing it.
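To make the "house in order" idea concrete, here is a minimal sketch of what such a data inventory entry could look like. The field names and the example record are my own invention for illustration, not taken from the guide:

```python
# Illustrative data-inventory record, loosely following the points the
# guide raises: what the data is, what it means, what it is used for,
# which processes use it, and whether there is legal authority to share it.
dataset_record = {
    "name": "customer_addresses",
    "fields": {
        "postal_code": "Four-digit Norwegian postal code",
        "municipality": "Municipality of residence",
    },
    "purpose": "Delivery logistics",
    "used_in_processes": ["order_fulfilment", "service_coverage_analysis"],
    "legal_basis_for_sharing": None,  # None means no documented basis
}

def may_share(record):
    """A record may only be shared if a legal basis is documented."""
    return record["legal_basis_for_sharing"] is not None

print(may_share(dataset_record))  # prints False: no basis documented
```

Even a trivial check like this forces an organization to write down, per dataset, the answers the guide asks for.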
They were even clearer on selection bias, which occurs when datasets only contain information about part of the relevant source data. If an algorithm that is meant to recognize images of dogs is only trained using images of dogs playing with balls, the algorithm may reason that it cannot be a picture of a dog if no ball appears in the image. Similarly, it is problematic if an algorithm meant for facial recognition is trained on images of faces from a single ethnic group.
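The dog-and-ball failure can be reproduced in a few lines with a toy naive Bayes classifier. This is my own illustration, not the Authority's example: the boolean features (`has_fur`, `has_ball`) stand in for whatever an image model would actually extract. Because every "dog" training example happens to contain a ball, the ball becomes a spurious cue:

```python
from collections import defaultdict

# Toy training set with selection bias: every dog image contains a ball.
# Each example is ((has_fur, has_ball), label).
train = [
    ((1, 1), "dog"), ((1, 1), "dog"), ((1, 1), "dog"),
    ((1, 0), "not_dog"), ((1, 0), "not_dog"), ((0, 0), "not_dog"),
]

def fit_naive_bayes(data):
    """Estimate P(label) and P(feature=value | label) by counting."""
    label_counts = defaultdict(int)
    feat_counts = defaultdict(int)  # (label, feature index, value) -> count
    for x, y in data:
        label_counts[y] += 1
        for i, v in enumerate(x):
            feat_counts[(y, i, v)] += 1
    def score(x, y):
        p = label_counts[y] / len(data)
        for i, v in enumerate(x):
            p *= feat_counts[(y, i, v)] / label_counts[y]
        return p
    return score

score = fit_naive_bayes(train)

def classify(x):
    return max(["dog", "not_dog"], key=lambda y: score(x, y))

# A perfectly ordinary dog photographed without a ball is rejected:
print(classify((1, 0)))  # prints "not_dog" -- the model learned the ball
print(classify((1, 1)))  # prints "dog"
```

The model has never seen a ball-free dog, so P(no ball | dog) is zero and the furry, ball-free test image is classified as "not a dog", exactly the failure mode described above.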
Bias can occur for other reasons as well; for example, a training dataset for supervised learning may contain bias resulting from human misjudgements or from historical bias in the source data (on account of, for example, the conventional view of men as holders of certain types of positions, or data containing more images of women than of men by a kitchen sink). Artificial intelligence can also be influenced by who defines the problems.
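One simple way to catch historical bias before training on labelled data is to compare the positive-label rate across groups. This is a sketch of my own, with invented field names and numbers, not a method from the strategy documents:

```python
# Minimal audit of a labelled dataset for historical bias: compare the
# positive-outcome rate per group. Records and rates are illustrative.
records = [
    {"group": "men", "hired": True}, {"group": "men", "hired": True},
    {"group": "men", "hired": False},
    {"group": "women", "hired": True}, {"group": "women", "hired": False},
    {"group": "women", "hired": False},
]

def positive_rate_by_group(rows, group_key, label_key):
    """Return {group: fraction of rows with a truthy label}."""
    totals, positives = {}, {}
    for r in rows:
        g = r[group_key]
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + (1 if r[label_key] else 0)
    return {g: positives[g] / totals[g] for g in totals}

rates = positive_rate_by_group(records, "group", "hired")
print(rates)  # a large gap between groups is a signal worth investigating
```

A gap in these rates does not prove the labels are unfair, but it flags exactly the kind of historical pattern (men over-represented in certain positions) that a model would otherwise learn and reproduce.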
They were also early in communicating the lack of transparency in deep learning AI solutions: some deep learning algorithms can be likened to a ‘black box’, where one has no access to a model that can explain why a given input value produces a given outcome.
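One common response to the black-box problem is perturbation-based explanation (the idea behind tools such as LIME): probe the model by changing one input at a time and watching how the output moves. The sketch below is my own toy illustration; the "black box" is a stand-in function, not a real deep learning model:

```python
# Perturbation-based explanation of a black box: flip each binary input
# feature and record how far the output moves. The model below is an
# opaque stand-in for illustration; the caller only sees its outputs.
def black_box(x):
    fur, ball, tail = x
    return 0.1 + 0.2 * fur + 0.6 * ball + 0.1 * tail

def feature_influence(model, x):
    """Absolute output change when each feature of x is flipped."""
    base = model(x)
    influence = []
    for i in range(len(x)):
        flipped = list(x)
        flipped[i] = 1 - flipped[i]
        influence.append(abs(model(tuple(flipped)) - base))
    return influence

x = (1, 1, 0)
print(feature_influence(black_box, x))  # largest value = most influential input
```

Here the second feature dominates the output, so even without opening the box we learn which input the decision hinged on; real explainability tools apply the same idea with many sampled perturbations.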
The World Economic Forum as early as 2017 characterized AI as one of the emerging technologies with the greatest potential benefits but also the greatest risks. The EU now has formal guidelines to promote responsible and sustainable development and use of artificial intelligence in Europe.
For development and use of AI to be defined as trustworthy, the European Commission’s high-level expert group believes that it must be lawful, ethical and robust. On this basis, the expert group has proposed seven principles for ethical and responsible development of artificial intelligence. The Government will adopt these principles as its basis for responsible development and use of artificial intelligence in Norway.
With the EU’s proposed legislation on artificial intelligence recently released, many leaders are very concerned that the progressive laws could hurt the bloc’s competitiveness and spur an exodus of investment. In an open letter sent to EU lawmakers on Friday, C-suite executives from companies including Siemens (SIEGY), Carrefour (CRERF), Renault (RNLSY) and Airbus (EADSF) raised “serious concerns” about the EU AI Act, the world’s first comprehensive set of AI rules.
Other prominent signatories include big names in tech, such as Yann LeCun, chief AI scientist of Meta (FB), and Hermann Hauser, co-founder of British chipmaker ARM.
“In our assessment, the draft legislation would jeopardize Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing,” the group of more than 160 executives said in the letter.
These prominent leaders argue that the draft rules go too far, especially in regulating generative AI and foundation models, the technology behind popular platforms such as ChatGPT.
On the other hand, only a few months ago, hundreds of experts warned about the risk of human extinction from AI, saying mitigating that possibility “should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Norway stands strongly aligned with and supportive of the EU’s AI legislation, as a country that has long focused on data protection and privacy and on setting early guardrails to guide it forward. It understands the importance of maintaining a strong position in AI for future generations.
Norway’s beauty is endless and its future is bright. With so many strong AI foundations in place, I can hardly wait to do more research on its AI leaders. I think a good start is Klas Pettersen, who leads NORA.ai, so stay tuned.
Independent High-Level Expert Group on Artificial Intelligence set up by the European Commission (2019): Ethics Guidelines for Trustworthy AI.
Ministry of Local Government and Modernisation: Publication no. H-2458 EN.