
Security vulnerabilities in open source AI and ML models: Protect AI report


ChuanhuChatGPT, Lunary and LocalAI
Severe security vulnerabilities in AI language tools


Large language models are now routinely used in companies. Protect AI's bug bounty program has uncovered critical security vulnerabilities in several of the open source tools built around them.

Serious security vulnerabilities have been found in the open source tools ChuanhuChatGPT, Lunary and LocalAI. (Image: Dall-E / AI-generated)

Security vulnerabilities have been found in 34 different open source models for artificial intelligence (AI) and machine learning (ML). The flaws were discovered through the bug bounty program of Protect AI, a security platform for AI models, and published in a vulnerability report. Among the affected tools are ChuanhuChatGPT, Lunary and LocalAI, which are used together with large language models (LLMs).


Two critical security vulnerabilities in Lunary

Two severe vulnerabilities affect Lunary. Both CVE-2024-7474 and CVE-2024-7475 have a CVSS score of 9.1, and both concern access control for AI models. With CVE-2024-7475, attackers can modify the SAML configuration, log in without authorization and access confidential information. CVE-2024-7474 is an IDOR (Insecure Direct Object Reference) vulnerability through which an authenticated user can view or delete external users; this can lead to unauthorized data access and data loss. CVE-2024-7473 is another IDOR flaw in Lunary, with a CVSS score of 7.5: by manipulating a parameter in the request, attackers can update the prompts of other users.
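The core of both IDOR issues is that the application checks only whether a request is authenticated, not whether the requester actually owns the object being read, changed or deleted. The following minimal Flask-style sketch in Python is purely illustrative (Lunary itself is a TypeScript project, and the endpoint and field names here are invented); it shows the ownership check that closes this class of bug.

# Hypothetical sketch of the IDOR pattern behind CVE-2024-7473/7474,
# not Lunary's actual code.
from flask import Flask, abort, request

app = Flask(__name__)

# toy in-memory store: prompt id -> owning user and text
PROMPTS = {1: {"owner": "alice", "text": "original prompt"}}

def current_user() -> str:
    # stand-in for real session or token handling
    return request.headers.get("X-User", "")

@app.route("/prompts/<int:prompt_id>", methods=["PUT"])
def update_prompt(prompt_id: int):
    prompt = PROMPTS.get(prompt_id) or abort(404)
    # Vulnerable variant: checking only that the caller is logged in would let
    # any authenticated user overwrite prompts that belong to someone else.
    if prompt["owner"] != current_user():   # ownership check closes the IDOR
        abort(403)
    prompt["text"] = request.get_json(force=True).get("text", "")
    return {"ok": True}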

Path traversal flaw in ChuanhuChatGPT

In ChuanhuChatGPT, specifically in the user upload functionality, a security vulnerability has been found. CVE-2024-5982 is a path traversal flaw with a CVSS score of 9.1. It is caused by the improper use of os.path.join in combination with unsanitized user input. This can lead to the execution of arbitrary code, the creation of directories, and the disclosure of sensitive data. ChuanhuChatGPT uses the affected functionality, among other things, for creating directories and loading templates.
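The underlying pattern is generic: os.path.join does not neutralize ".." segments, so joining a base directory with an unchecked filename can resolve to a path outside that directory. The sketch below is not ChuanhuChatGPT's actual code; the upload directory is hypothetical, and it only contrasts the unsafe join with one common way of validating the resolved path.

# Generic illustration of the path traversal class behind CVE-2024-5982.
import os

UPLOAD_DIR = "/srv/app/uploads"   # hypothetical base directory

def unsafe_path(filename: str) -> str:
    # os.path.join follows ".." (and discards the base for absolute paths),
    # so "../../etc/passwd" escapes UPLOAD_DIR entirely
    return os.path.join(UPLOAD_DIR, filename)

def safe_path(filename: str) -> str:
    # resolve first, then verify the result still lives inside UPLOAD_DIR
    base = os.path.realpath(UPLOAD_DIR)
    candidate = os.path.realpath(os.path.join(UPLOAD_DIR, filename))
    if os.path.commonpath([candidate, base]) != base:
        raise ValueError("path traversal attempt blocked")
    return candidate

print(unsafe_path("../../etc/passwd"))  # resolves to /etc/passwd
print(safe_path("report.pdf"))          # allowed
# safe_path("../../etc/passwd") would raise ValueError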


Remote code execution in LocalAI

The open source project LocalAI is also affected by security vulnerabilities. Version 2.17.1 is vulnerable to remote code execution. The flaw CVE-2024-6983 (CVSS 8.8) exists because the LocalAI backend does not sufficiently validate uploaded configuration data. Attackers can upload a crafted configuration file containing malicious code, which can lead to a compromise of the system.
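LocalAI is written in Go and the report does not reproduce the vulnerable code, but the general class of bug, feeding an untrusted configuration file into a loader that can do more than parse data, can be shown in a few lines of Python. The example below uses PyYAML purely as a stand-in and is not LocalAI's actual mechanism.

# Generic illustration: why untrusted configuration files must never reach
# an unsafe deserializer. PyYAML's unsafe loader can instantiate arbitrary
# Python objects described in the file.
import yaml

malicious_config = """
!!python/object/apply:os.system
- "echo pwned"          # attacker-controlled command
"""

# yaml.safe_load refuses the python/object tags and raises an error
try:
    yaml.safe_load(malicious_config)
except yaml.YAMLError as exc:
    print("rejected:", exc)

# yaml.load with yaml.UnsafeLoader would execute the command instead:
# yaml.load(malicious_config, Loader=yaml.UnsafeLoader)   # do not do this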

The vulnerability CVE-2024-7010 (CVSS 7.5) also affects LocalAI. It is a timing attack. In this type of cyberattack, also known as a side-channel attack, the time a system needs to execute cryptographic operations is analyzed in order to draw conclusions about the cryptosystem. In the case of LocalAI this leads to unauthorized access in credential handling: attackers can derive login information from the server's response times.
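A typical mitigation for this class of flaw is to compare secrets in constant time, so that the response time no longer depends on how many characters of a guess are already correct. The Python sketch below is purely illustrative and does not reflect LocalAI's implementation; the key value is invented.

# Minimal sketch of constant-time credential comparison.
import hmac

VALID_KEY = "s3cr3t-api-key"   # hypothetical secret

def naive_check(candidate: str) -> bool:
    # '==' can stop at the first differing character, so the response time
    # may leak how much of the key an attacker has already guessed correctly
    return candidate == VALID_KEY

def constant_time_check(candidate: str) -> bool:
    # hmac.compare_digest examines the whole input regardless of mismatches
    return hmac.compare_digest(candidate.encode(), VALID_KEY.encode())

print(naive_check("wrong-key"), constant_time_check("s3cr3t-api-key"))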

The importance of AI language models in companies

Hardly any company today works without AI language models, whether it is an AI bot for customer support or automated documentation. Outdated installations put the AI/ML supply chain at risk and increase the potential for attacks. Protect AI states that the tools listed were updated in the course of the month, so that more secure versions are now available. In its detailed report, the vendor also gives recommendations for measures that should be taken immediately if companies have these tools in use.

What do companies use large language models for?

Large language models (LLMs) offer companies a wide range of options for automating and optimizing processes. Their flexibility and the ability to adapt models to specific requirements make them a tool that, driven by rapid developments in artificial intelligence, continues to gain importance. The most important areas of use include:

1. Automation of processes: LLMs routinely take over tasks such as answering service requests, analyzing texts or creating internal documentation. This automation replaces time-consuming manual work and saves time and resources.

2. Improving customer service: Through chatbots and automated response systems, LLMs can answer customer inquiries directly, solve problems and ensure fast support for customers.

3. Data analysis and information extraction: AI language models can analyze large volumes of unstructured data such as emails, customer feedback or reports. This allows companies to identify trends, problems and key topics more quickly.

4. Content generation: Marketing and PR teams use LLMs to create copy, product descriptions, social media posts and blog articles. This speeds up content production and reduces the workload involved.

5. Training and knowledge transfer: Language models are used in training programs, for example to familiarize employees with topics such as data protection and compliance. They answer questions in real time and provide important information as part of everyday work.


(ID:50223743)