The use of artificial intelligence (AI) in the absence of a strategy or regulation exposes citizens to the risk of abuse. The Republic of North Macedonia established a working group for a National Strategy for Artificial Intelligence, yet the group has held only one meeting since 2021, writes Portalb.mk.
The launch of ChatGPT, a chatbot able to converse with human-like fluency and coherence, drew the world’s attention to the use of artificial intelligence in the service of people, but also to the possible threats it could pose.
Portalb.mk, within the ICEDA project, talked to experts and representatives of institutions in the Republic of North Macedonia about the country’s readiness to use AI in the public sector, whether the relevant digital infrastructure is in place, and what risks the use of artificial intelligence poses in the absence of a National Strategy.
The country does not have a strategy for the use of artificial intelligence, and its use can be dangerous
So far, none of the state institutions in North Macedonia has used artificial intelligence.
On the initiative of the Fund for Innovation and Technology Development (FITR) and the Deputy Prime Minister in charge of Economic Affairs, a working group was formed with the goal of creating the first National Strategy for Artificial Intelligence in the Republic of North Macedonia. The strategy is part of the country’s economic development plan and of the National Development Strategy 2021–2041. However, since its establishment, this group has met only once.
Representatives of the Fund for Innovation and Technology Development said that drafting the strategy is a complex process requiring significant human resources and time: “In the next steps, the FITR, in communication with UNDP, the World Bank, and other stakeholders with experience and knowledge in drafting a National Strategy for Artificial Intelligence, will work on engaging a managing expert/consultant or a team to assist, with the support of the working group, in drafting this strategy and in creating the financial structure for carrying out the whole process.”
The Personal Data Protection Agency informed Portalb.mk that artificial intelligence algorithms are increasingly making important decisions about people’s lives without any oversight or accountability, and that, when applied inappropriately or irresponsibly, artificial intelligence can result in job loss, prejudice, racist attitudes, and other harms.
Representatives of the Personal Data Protection Agency stated: “For this reason, review and standard-setting initiatives are motivated by concerns about the potential misuse or unintended consequences of artificial intelligence. Artificial intelligence technology has the potential to improve our lives and our individual and societal safety and security; however, this should not come at the expense of our fundamental values and rights. The Personal Data Protection Agency, following EU guidelines, monitors the development and use of artificial intelligence technologies in our country and strives for a balanced approach between the speed of technological change and the possible challenges to the right to personal data protection, as one of the basic human rights.”
Experts assess that the lack of a strategy or legal regulation on the use of artificial intelligence exposes citizens to the risk of abuse, and that the country must therefore take steps to protect them.
Members of the civic association “Konekt”, whose representative sits on the working group for the Artificial Intelligence Strategy, point out that attention should be paid to privacy, oversight, the implementation of the risk-based principle, and respect for human rights. According to them, if these issues are not addressed, major ethical challenges may arise in the development of systems managed by artificial intelligence.
“Considering that an act on artificial intelligence is currently being prepared at the EU level, and that the Republic of North Macedonia, as a candidate country for EU membership, is obliged to transpose the regulations and directives that are mandatory in the EU, it may be better to wait before adopting the national strategy. What we can do in the meantime is use the existing working group for the National Strategy for Artificial Intelligence to formulate a state position on the Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law,” Konekt stated.
Regarding the risk of personal data misuse by AI, the members of Konekt add that the Republic of North Macedonia is fully compliant with European regulations and has mechanisms for data protection; a risk arises, however, if the population is not sufficiently educated about what personal data encompasses, how to protect it, and the remedies available if it is misused.
“The risk of personal data misuse arises from the people themselves, not from artificial intelligence,” the members of Konekt stated.
University professor Dr. Petre Lameski assesses that the lack of a strategy and of visible regulation exposes citizens to the risk of abuse, and further hampers the development of institutions, which keep delaying the introduction of AI into their operations. According to the professor, the problem for the country is that software applications that make intensive use of artificial intelligence are built and offered by companies based in other countries; our data is processed and used there, so third parties reap the benefits of artificial intelligence.
“The adoption of an AI strategy is something that should have been done already, but it is not too late. The goal of the strategy, in my opinion, should be to develop state capacities and subsidize companies to introduce artificial intelligence into their operations and reap its benefits, but also to introduce oversight and control mechanisms that could regulate the consequences of its use, especially the economic ones. There is already a global race in the development of artificial intelligence in all fields. Macedonia has capacities that, if properly used, would enable the country to rank high in the world in adopting artificial intelligence. In that regard, it is necessary to carefully implement the regulatory mechanisms proposed by the larger countries, especially the EU, and to consider state interests and the development potential offered by AI,” says Professor Lameski.
Regarding data misuse when using AI, Professor Lameski says that the Republic of North Macedonia has harmonized its data protection law with that of the EU. However, according to him, AI can be counterproductive, even harmful, if the decisions it produces are taken at face value and no further evaluation is made. As the professor says, there are already developed methods that can be applied in practice to maintain the quality of applications that use artificial intelligence and to reduce or eliminate the potential damage they can cause.
Dr. Mentar Mahmudi, a senior applied scientist at Amazon, listed for Portalb.mk the potential consequences of using artificial intelligence without a strategy or legal provisions:
- Violation of human rights: The use of artificial intelligence without rules and oversight mechanisms can violate basic human rights, such as the right to privacy, personal freedom, and protection from discrimination.
- Spreading inequality: AI algorithms can be used to develop systems that treat different groups of people unequally, including minorities and groups already facing discrimination.
- Increasing economic inequality: If AI is used to replace human workers, it could lead to increased economic inequality and unemployment.
- Increased risk of cybercrime: If AI systems are not secure, they may be vulnerable to attacks by cybercriminals, leading to loss of information and breach of privacy.
- Control and safety: Loss of control over artificial intelligence can have dangerous consequences for people and the environment; for example, the automation of pilotless aircraft can cause serious accidents.
- Transparency and accountability: If an AI system is not transparent and explainable, it can be difficult to monitor and verify whether its decisions are correct and fair.
According to Dr. Mahmudi, in order to minimize these risks, it is important for countries and private organizations to use artificial intelligence wisely and with clear goals. This may include developing laws and regulations governing its use, as well as ongoing oversight of its applications.
Regarding personal data misuse, Dr. Mentar Mahmudi says that AI can be dangerous because the technology can quickly and efficiently collect, process, and analyze large amounts of data. If it is used irresponsibly or without respect for individuals’ privacy rights, it can have serious consequences for them.
According to him, it is important for institutions and companies to be aware of the risk of personal data misuse when using artificial intelligence and to take the necessary measures to protect individuals’ privacy. He also adds that the country and the relevant institutions must invest in training and informing both specialists and the public about this risk, and must promote a culture of privacy and personal data protection.
This research story was prepared as part of the project “Increasing Civic Engagement in the Digital Agenda – ICEDA”, co-financed by the European Union and implemented by the Metamorphosis Foundation (North Macedonia), the e-Governance Academy (Estonia), the Dosta! Campaign (Albania), Partners for Democratic Change Serbia (Serbia), NGO 35mm (Montenegro), and Open Data Kosovo (Kosovo).
This research story was prepared with the financial support of the European Union. The content of the research story is the sole responsibility of the Metamorphosis Foundation and the author and in no way reflects the views of the European Union.