Artificial Intelligence for Business and Society: Challenges, Opportunities, and Ethics
In a recent LRT radio show, Aistis Raudys, CEO of AAI Labs, discussed the latest AI trends, challenges, and ethical issues, emphasizing why it is important for both technology users and businesses to understand the dynamics of this rapidly evolving field.
AI advancements are multifaceted, with global attention focused on GenAI, as innovators seek to make AI capable of autonomous learning.
Global attention is largely focused on language models such as ChatGPT, which can generate text, work with images, analyze videos, and even process voices. Increasingly, innovators are seeking ways for AI to learn independently—so AI can develop itself without relying solely on human-created data. Another key area is AI’s ability to write code, helping specialists solve complex problems more efficiently.
One of the biggest challenges in AI development is the shortage of qualified human resources. AI can learn much faster than humans can create training data. That’s why new solutions are being developed so AI can learn independently, grow efficiently, and reach higher levels of intellectual performance.
While the EU is advancing its regulatory efforts to make AI safe, some major tech giants show signs of reluctance to comply.
The European Union has introduced an AI Code of Conduct, grouping AI applications according to their risk level—from strictly prohibited uses (like mass surveillance or biometric data collection) to low-risk scenarios with minimal regulation. The code helps companies navigate responsible AI deployment in various contexts.
It’s important to note that some major companies, such as Meta (which owns Facebook and Instagram), have so far refused to sign onto the code. Their argument is that the code is voluntary, its restrictions could slow AI development, and the final official version of EU AI law is still pending (expected in 2027). Other technology giants, like Microsoft and OpenAI, have already signed, presenting themselves as more ethical companies that value user data protection. In doing so, however, they accept additional restrictions that could slow their AI progress.
For ordinary users, the reluctance of players such as Meta to comply means their data could be used to train AI models, raising moral questions. You can protect your personal data simply by sharing less content or by closing your account altogether.
The current challenge is to balance the protection of human rights with the intensifying competition in AI.
It’s a challenge to balance human rights and rapid AI progress. More regulation slows development but strengthens security; less regulation allows for faster innovation but increases risks to personal data. In Europe, a trend may develop: acquiring AI solutions from other countries where development is faster.
Specialists who use AI will replace the specialists who don’t.
In some sectors, AI is already capable of replacing junior-level specialists. But the greater impact comes not from AI itself but from the people who know how to use it: they become more productive and more competitive. The benefits of AI are particularly evident for knowledge workers: automating tasks, speeding up communication, and assisting with management and analytics.
Changes in the AI field do not happen overnight. Implementing new technologies takes investment and time, so changes in the job market and professions will be gradual.
AI is a powerful aid for daily task automation, but its output should always be taken with a grain of salt.
AI is valuable not only at work but also in everyday activities—generating text, brainstorming ideas, automating tasks. ChatGPT and other models are practical assistants, helping solve daily challenges.
Yet AI makes mistakes. It’s important to use AI critically, double-checking its suggestions and the information it provides. The latest models can search the web for answers, but users should still verify whether the results are accurate.
Contact our team today, and let’s find the best approach together!