Innovative AI - Artificial Intelligence (AI) and its impact on businesses.
The EU AI Act and Beyond: Overview of the Landscape of AI Regulation
This week in Innovative AI
Innovative AI Shorts:
Ensuring AI's Health: Defending Against Cyberattacks in the Era of Artificial Intelligence
The Dark Shadow of Tuskegee: Healing Trust in Healthcare and AI
From Data to Innovation: Generative AI's Potential in Healthcare
Executives’ Highlight: EU Regulation: The EU AI Act and Beyond - Overview of the Landscape of AI Regulation
Prompts for your Organization: Midjourney for designing working areas; ChatGPT for team-building activities
Overview: Center for Deep Tech Innovation Events
Tools to try out: Beautiful.ai
Innovative AI Shorts
Ensuring AI's Health: Defending Against Cyberattacks in the Era of Artificial Intelligence
Artificial Intelligence (AI) has become an integral part of our daily lives, aiding us in various tasks. However, the increased reliance on AI also brings along a new set of challenges, with cyberattacks being one of them. This article by Jai Infoway sheds light on the importance of protecting AI systems in healthcare from potential threats. It emphasizes the need for robust security measures and continuous monitoring to identify and prevent any potential cyber threats. With the healthcare sector relying heavily on AI to enhance patient care and streamline operations, safeguarding these systems is crucial for the safety and privacy of patients.
The Dark Shadow of Tuskegee: Healing Trust in Healthcare and AI
The legacy of the Tuskegee Syphilis Study serves as a stark reminder of the enduring consequences of injustice in healthcare and research. This unethical experiment, conducted between 1932 and 1972, shattered trust in medical research and healthcare among African Americans/Blacks, and continues to contribute to health disparities. Racial minorities are less likely to participate in clinical trials, limiting medical advancements and perpetuating health inequalities. The mistrust also affects vaccine acceptance, as seen during the COVID-19 pandemic. As we explore the potential of AI in healthcare, it is crucial to address biases and ensure equitable, unbiased, and trustworthy systems that serve everyone. By learning from history and promoting ethical research practices, we can work towards rebuilding trust and achieving equitable healthcare for all.
From Data to Innovation: Generative AI's Potential in Healthcare
Generative AI, a branch of artificial intelligence that can produce creative outputs, has the potential to revolutionize the healthcare field, according to a recent report. This technology, which can create unique and tailored outputs based on given data, is applicable to a wide range of healthcare use cases. From drug discovery and genetic research to patient diagnosis and personalized treatment plans, generative AI can offer innovative solutions and improve healthcare outcomes. With its ability to augment human capabilities and speed up processes, this advancement in AI is set to transform the healthcare landscape and enhance patient care.
EU Regulation: The EU AI Act and Beyond - Overview of the Landscape of AI Regulation
In an era where artificial intelligence (AI) is reshaping the contours of society, the European Union (EU) is poised to make history. As Dragoș Tudorache, Member of the European Parliament and Vice-President of the Renew Europe Group, aptly put it, "Artificial intelligence does have a profound impact on everything we do and therefore it was time to bring in some safeguards and guardrails on how this technology will evolve for the benefit of our citizens." (The Guardian, 2023) Against this backdrop, the EU is taking action with the so-called EU AI Act, a regulatory framework we take a closer look at in the following.
Overview: The EU's Approach to AI Regulation
The EU AI Act, often dubbed the "GDPR for AI", is a testament to the EU's commitment to ensuring that AI evolves in harmony with human rights and societal values. It takes a risk-based approach: AI systems that pose unacceptable risks, such as voice-activated toys promoting dangerous behavior or AI-based social scoring systems, are banned outright. In doing so, the Act also raises awareness of the importance of responsible AI among businesses, regulators, and the wider public.
The AI Act's deliberations on AI-powered live facial recognition exemplify the challenges of balancing security with privacy.
Here, the EU draft regulation was updated this year to categorize AI models and tools as "high" risk or outright "unacceptable." AI tools and uses deemed "unacceptable" will be banned in the EU altogether. This includes "remote biometric identification systems," or facial-recognition technology; "social scoring," or categorizing people based on economic class and personal characteristics; and "cognitive-behavioral manipulation," such as voice-activated AI-powered toys.
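To make these tiers concrete, here is a minimal sketch, written by us purely for illustration, of how an organization might triage its own AI use cases against the risk categories described above. The tier assignments and example use cases are assumptions for demonstration, not legal guidance.

# Illustrative only: triaging AI use cases against the EU AI Act's risk tiers
# as summarized above. The example mappings are assumptions, not legal advice.
UNACCEPTABLE = {
    "social scoring",
    "remote biometric identification",
    "cognitive-behavioral manipulation",
}
HIGH_RISK = {
    "medical diagnosis support",
    "credit scoring",
    "hiring screening",
}

def classify_use_case(use_case: str) -> str:
    """Return an assumed risk tier for a given AI use case."""
    if use_case in UNACCEPTABLE:
        return "unacceptable - prohibited"
    if use_case in HIGH_RISK:
        return "high risk - conformity assessment and monitoring required"
    return "limited/minimal risk - lighter transparency obligations"

for case in ["social scoring", "medical diagnosis support", "email autocomplete"]:
    print(f"{case}: {classify_use_case(case)}")

In practice, such a mapping would of course be maintained with legal counsel and updated as the final legislative text evolves.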
EU AI Act's Stance on Generative AI Models
Generative AI models fall within the EU AI Act under the broader category of foundation models, i.e., models trained on vast and diverse datasets to produce a wide range of outputs. The Act mandates providers of generative AI to implement state-of-the-art safeguards against producing content that violates EU laws and to transparently disclose the use of copyrighted training data. Furthermore, there's an emphasis on transparency, especially when generative AI is used to create manipulative content, such as "deep fakes."
Beyond these specific obligations, generative AI systems, being a subset of foundation models, must also adhere to broader obligations set for foundation models, including risk mitigation, unbiased dataset usage, and energy efficiency. The Act also outlines stringent compliance monitoring mechanisms, with substantial fines for non-compliance, ranging up to 7% of the total worldwide annual turnover or EUR 40 million, whichever is higher.
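As a quick worked example of the penalty ceiling mentioned above, here is a minimal sketch in Python; the turnover figure is a made-up assumption.

# Illustrative calculation of the maximum fine described above:
# the higher of 7% of total worldwide annual turnover or EUR 40 million.
def max_fine(annual_turnover_eur: float) -> float:
    return max(0.07 * annual_turnover_eur, 40_000_000)

# Assumed example: a company with EUR 2 billion in annual turnover.
print(f"Maximum fine: EUR {max_fine(2_000_000_000):,.0f}")  # EUR 140,000,000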
However, as discussions between the European Parliament and the European Council continue, the final legislative text may undergo changes, reflecting the dynamic nature of AI developments.
Global Implications of the EU AI Act: US, UK, Brazil, and Beyond
Now that we have taken a look at the EU AI Act, let's consider the implications this initiative might have for other jurisdictions.
The US's Calculated Approach
The US is behind the EU when it comes to regulating AI. Last month, the White House said it was "developing an executive order" on the technology and would pursue "bipartisan regulation." While the White House has been actively seeking advice from industry experts, the Senate has convened one hearing and one closed-door "AI forum" with leaders from major tech companies.
Neither event resulted in much action, despite Mark Zuckerberg being confronted during the forum with the fact that Meta's Llama 2 model gave a detailed guide for making anthrax. Still, American lawmakers say they're committed to some form of AI regulation. "Make no mistake, there will be regulation," Sen. Richard Blumenthal said during the hearing.
The UK's Aspirational Stance
The UK, meanwhile, wants to become an "AI superpower," a March paper from its Department for Science, Innovation and Technology said. While the government body has created a regulatory sandbox for AI, the UK has no immediate intention of introducing any legislation to oversee it. Instead, it intends to assess AI as it progresses. "By rushing to legislate too early, we would risk placing undue burdens on businesses," Michelle Donelan, the secretary of state for science, innovation, and technology, said.
Brazil's Human Rights-Centric Approach
In a draft legislation update earlier this year, Brazil looked to take a similar approach to the EU in categorizing AI tools and uses by "high" or "excessive" risk and to ban those found to be in the latter category. The proposed law was described by the tech-advisory firm Access Partnership as having a "robust human rights" focus while outlining a "strict liability regime." With the legislation, Brazil would hold creators of an LLM liable for harm caused by any AI system deemed high risk.
China's Restrictive Stance
China, despite its widespread use of tech such as facial recognition for government surveillance, has enacted rules on recommendation algorithms and "deep synthesis" tech. Now it's looking to regulate generative AI. One of the most notable rules proposed in draft legislation would mandate that any LLM, and its training data, be "true and accurate." That one requirement could be enough to keep consumer-level generative AI out of China almost entirely, given how generative AI actually works: large language models produce probabilistic output and cannot guarantee that every response is true and accurate.
Our stand on the EU AI Act
The EU AI Act represents a significant stride in establishing regulatory frameworks for artificial intelligence, akin to MiCAR in the blockchain/crypto space. We firmly believe that the early implementation of sound regulations plays a pivotal role in fostering economic growth within this sector. Thus, it is up to experts, the public, and politicians to collaboratively ensure the formulation of judicious rules that facilitate business development and societal advancement across the European Union.
Prompts for your Organization
ChatGPT for team-building activities
“List 3 ideas for team-building activities in the Berlin area in the summer. Team of 25 people. Whole day. Include approximate cost and time estimates.”
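If you prefer to run this prompt programmatically rather than in the ChatGPT interface, a minimal sketch using the official OpenAI Python client could look like the following; the model name is an assumption, so use whichever model your plan provides, and make sure your API key is set in the environment.

# Minimal sketch: sending the team-building prompt to the OpenAI API.
# Assumes the `openai` Python package is installed and OPENAI_API_KEY is set;
# the model name below is an assumption - adjust it to your setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = ("List 3 ideas for team-building activities in the Berlin area in the "
          "summer. Team of 25 people. Whole day. Include approximate cost and "
          "time estimates.")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)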
Midjourney for designing working areas
“a vibrant and bustling creative working area within a modern company. The space is filled with natural light streaming through large, floor-to-ceiling windows. The walls are adorned with colorful, inspirational artwork. There's a mix of open, collaborative workstations and cozy nooks with comfortable seating for focused work. The center of the room hosts a communal worktable, where team members gather for lively discussions and collaboration.”
Overview: Center for Deep Tech Innovation Events
As some of our readers might know, we offer a series of webinars on the topic of AI (and other technologies in the near future).
Below you find a list of upcoming webinars:
Wednesday, November 8, 2023, 4:00 p.m. (CET) Blended Outsourcing in Software: How to get the most value for your money in times of skills shortage.
Register now here
Wednesday, November 29, 2023, 4:00 p.m. (CET) ChatGPT Beyond the Hype - Embeddings, RAGs & On-Premise LLMs for Innovators
Register now here
Wednesday, December 6, 2023, 4:00 p.m. (CET) Prompt Engineering for Business Transformation: A Workshop.
Register now here
For more content, feel free to visit our YouTube channel and LinkedIn page.
Tools to try out: Beautiful.ai
Beautiful.ai is a web-based presentation tool that leverages artificial intelligence to simplify the design process for users. Instead of starting with a blank canvas, it provides smart templates that automatically adjust to the user's content, handling aspects like fonts, colors, and layouts. With over 1 million active users, it caters to professionals across various fields. The platform offers features like real-time collaboration, PowerPoint and PDF export options, and an AI-powered bot named "DesignerBot" for rapid presentation creation.
With that, we come to the end of this newsletter. Stay tuned for the upcoming editions, where we will keep informing you about the latest developments and applications of AI for business and beyond.