jazzy-software - The Blog

jazzy-software, updated Friday, 29 March 2024


About artificial intelligence

A few years ago it was “blockchain”; today “artificial intelligence” is the solution to all problems, whether it fits the problem or not.

The hype surrounding ChatGPT has cooled somewhat this year after repeated reports of its creative handling of facts. Nevertheless, AI remains on everyone's lips as the new miracle technology, from heads of government and commissions to Microsoft and Oracle.

Oracle Database 23c is here: natural language SQL queries

AI also comes to Oracle's new database 23c: the features are intended to make developers more productive and to enable business users to evaluate their mountains of data themselves.

[...]

This is meant to allow developers to formulate queries in natural language, which the system automatically translates into SQL. Work on this is already well advanced: according to Andy Mendelsohn, Executive Vice President, Database Server Technologies Development, the current accuracy of such automatically generated SQL queries is 60 to 70 percent. “The AI experts tell me that 70 percent is a very good value that can hardly be improved even with better training data,” is his assessment of the accuracy of natural-language queries.

https://www.heise.de/news/Oracle-Datenbank-23c-ist-da-SQL-Abfragen-in-natuerlicher-Sprache-9317378.html
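
Conceptually, such a feature is a pipeline: hand the model the database schema and the user's question, let it generate SQL, and validate the result before anyone executes it. Oracle has not published the internals, so the following Python sketch is purely illustrative; ask_llm is a hypothetical stand-in for whatever model is actually queried, and the schema and the SQLite validation step are my assumptions, not Oracle's implementation:

    import sqlite3

    SCHEMA = "CREATE TABLE orders (id INTEGER, customer TEXT, total REAL, placed DATE);"

    def ask_llm(prompt: str) -> str:
        # Hypothetical stand-in: a real system would call a language model here.
        return "SELECT customer, SUM(total) FROM orders GROUP BY customer;"

    def nl_to_sql(question: str) -> str:
        prompt = f"Schema:\n{SCHEMA}\nQuestion: {question}\nSQL:"
        sql = ask_llm(prompt)
        # Check the generated SQL against the schema before running it anywhere;
        # at a 60 to 70 percent hit rate, 3 or 4 out of 10 queries are wrong.
        conn = sqlite3.connect(":memory:")
        conn.execute(SCHEMA)
        try:
            conn.execute("EXPLAIN QUERY PLAN " + sql)  # parses and plans, executes nothing
        except sqlite3.Error as err:
            raise ValueError(f"model produced invalid SQL: {err}")
        return sql

    print(nl_to_sql("Total revenue per customer?"))

Note that this check only catches SQL that does not parse. A query that is syntactically valid but answers the wrong question, the more common failure mode, sails straight through.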

The idea is not that new, however. As early as 1985, Symantec sold the "Q&A" database, which offered a natural-language query function. It was based on a vocabulary of around 600 words and worked surprisingly well. The system became obsolete when Microsoft Windows established itself and users preferred clicking through selection lists with the mouse to typing queries on the keyboard.
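
Q&A's parser was never published, but the principle of a small, fixed vocabulary is easy to illustrate. In the following toy sketch (my construction, not Symantec's code), every known word maps to a field or an operation, and anything unknown is rejected rather than guessed at:

    # Toy restricted-vocabulary parser: known words map to query fragments.
    FIELDS = {"customer": "customer", "city": "city", "total": "total"}
    VERBS = {"show", "list", "find"}
    NOISE = {"the", "all", "of", "me", "and"}

    def parse(question: str) -> str:
        words = question.lower().rstrip("?").split()
        fields = [FIELDS[w] for w in words if w in FIELDS]
        unknown = [w for w in words if w not in FIELDS.keys() | VERBS | NOISE]
        if unknown or not fields:
            # Unlike an LLM, the system refuses instead of hallucinating.
            raise ValueError(f"cannot interpret: {unknown or words}")
        return "SELECT " + ", ".join(fields) + " FROM records"

    print(parse("show me the customer and city"))  # SELECT customer, city FROM records

The refusal is the crucial difference: a vocabulary system either understands a query within its 600 words or says that it does not, while a language model always produces something.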

While Q&A back then ran on an 8086 processor with a few hundred kilobytes of main memory and far exceeded Oracle's hit rate of 70 percent, today's AI is anything but "green". The desktop ChatGPT offshoot gpt4all requires, as a minimum, a computer with at least 6 cores and 16 GB of memory. A query then takes several minutes, during which all cores run at full load. If you want to train on your own data, 32 GB of graphics memory is the minimum requirement.
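
For scale: running such a local model takes only a few lines, assuming the gpt4all Python bindings (the model file name below is an example and changes between releases):

    # Sketch: local inference with the gpt4all Python bindings.
    from gpt4all import GPT4All

    # Downloads the model file on first use; inference runs on the CPU,
    # which is why every core sits at full load while it generates.
    model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")
    print(model.generate("Explain SQL injection in one sentence.", max_tokens=100))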

Training large language models is an energy-intensive process with an immense CO2 footprint. Microsoft is already planning to power its data centers with small modular reactors (SMRs).

https://futurism.com/the-byte/microsoft-power-train-ai-small-nuclear-reactors
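
The scale becomes tangible with back-of-envelope arithmetic. All numbers below are assumptions for illustration, not measured values for any particular model:

    gpus = 1_000   # accelerators in a modest training cluster (assumption)
    watts = 400    # power draw per accelerator under load (assumption)
    days = 30      # length of the training run (assumption)
    kwh = gpus * watts * 24 * days / 1_000
    tons_co2 = kwh * 0.4 / 1_000  # assumed grid intensity: 0.4 kg CO2 per kWh
    print(f"{kwh:,.0f} kWh, about {tons_co2:,.0f} t CO2")
    # 288,000 kWh and roughly 115 t of CO2 - and frontier-scale runs use
    # clusters one to two orders of magnitude larger than this.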

The question is whether the energy expenditure is justified by the results. Jürgen Schmidt draws a conclusion that matches my own experience:

"... in the areas in which I have good to very good knowledge, it quickly became apparent that ChatGPT's answers are often inaccurate, unhelpful in terms of understanding and sometimes downright wrong. This not only makes ChatGPT unsuitable as a teacher, but even downright dangerous."

https://www.heise.de/hintergrund/Hintergrund-zum-Quiz-ChatGPT-erfindet-Security-7491475.html

The Oracle example above does not show a good result; it shows a disqualification for the task at hand. But it is the license to collect data without restraint that keeps the AI hype going. In the name of science and security, as here at Microsoft:

"In the future, the company says it wants to expand its protection solutions such as Microsoft Threat Analysis Center (MTAC) to include AI tools. With this, they want to detect and analyze threats more effectively, among other things. Customers should benefit directly from this. Microsoft states that more than "To get 65 [American] trillion signals from devices. AI approaches should optimize the evaluation and make it easier to find a needle in a haystack."

https://www.heise.de/news/Microsoft-will-IT-Security-besser-machen-und-startet-Secure-Future-Initative-9352878.html

So telemetry is already phoning home more data than Microsoft can process. AI is now supposed to digest this mass of data and thereby, supposedly, fix what mass marketing of unsuitable concepts and a lack of quality control in software development have broken, as here:

Microsoft Exchange is impacted by four zero-day vulnerabilities that attackers can exploit remotely to execute arbitrary code or disclose sensitive information on affected installations.

https://www.bleepingcomputer.com/news/microsoft/new-microsoft-exchange-zero-days-allow-rce-data-theft-attacks/

According to the Chamber of Industry and Commerce, such things are simply fate, and there is nothing you can do about them:

With its company website, its employees' daily email traffic or its networked processes, your company too is on the daily wheel of fortune of cyber attacks.

https://www.it-sicherheitstag-ihk-nrw.de/Start.html

Replacing notoriously insecure software with more secure solutions, in particular, is apparently not an option.

When it comes to collecting data, there are also desires from other quarters:

Europol wants to collect chat control data indefinitely

Europol wants unfiltered access to chat control data in order to train AI algorithms. This emerges from internal documents that we are publishing. Two former EU police officers have moved to the US organization Thorn, which is lobbying massively for the planned law.

https://netzpolitik.org/2023/interne-dokumente-europol-will-chatkontrolle-daten-unbegrenzt-sammeln/

The European Commission puts it like this:

Trustworthy artificial intelligence (AI) can bring many benefits, such as better healthcare, safer and greener transportation, more efficient manufacturing, and cheaper and more sustainable energy. The EU's approach will give citizens the confidence to use this technology and encourage companies to develop it.

https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/excellence-and-trust-artificial-intelligence_de

We have already discussed the sustainable environmental friendliness of AI above. The EU Commission mentions three EU-funded projects on this page, of which I would like to single out the last:

WeVerify

An open source platform aiming to engage communities and citizen journalists alongside newsroom and freelance journalists for collaborative, decentralised content verification, tracking, and debunking.

https://weverify.eu

AFP and Deutsche Welle are also involved. Analyzing image and video material for fakes may still be acceptable. But there is also the “Coordinated Inauthentic Behaviour Detection Tree”, which says:

Although a clear and widely accepted definition of CIB is currently missing, the concept is growing consensus among the platform, especially because it allows the platform to work on account removal without looking at and judging the content.

https://weverify.eu/news/coordinated-inauthentic-behaviour-detection-tree/

It is about the automatic removal of unwanted content "without looking at and judging the content". The AI is to be used for automatic account removal from social networks; in other words, it is to carry out censorship tasks.
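
To make concrete what "without looking at and judging the content" means, here is an illustrative sketch (my construction; it does not reproduce WeVerify's actual detection tree). Accounts are scored purely on posting-time synchrony; the text of the posts never enters the decision:

    from itertools import combinations

    # account -> posting timestamps in minutes since some epoch (toy data)
    posts = {
        "a1": [10, 11, 12, 50],
        "a2": [10, 11, 13, 51],
        "a3": [200, 340, 500],
    }

    def synchrony(t1, t2, window=2):
        # Fraction of account 1's posts that have a post of account 2
        # within `window` minutes.
        return sum(any(abs(a - b) <= window for b in t2) for a in t1) / len(t1)

    for x, y in combinations(posts, 2):
        score = synchrony(posts[x], posts[y])
        print(x, y, f"{score:.2f}", "-> flag for removal" if score > 0.7 else "")

Accounts a1 and a2 score 1.00 on timing alone. Whether they are a disinformation campaign or simply two fans live-posting the same football match, this metric cannot tell, which is exactly the problem with removal decisions made without reading a single post.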

There are two questions that arise here:

1. Is the AI itself capable and trustworthy enough to do this?

2. Is automatic censorship compatible with the constitutionally enshrined freedom of expression?

Conclusion

The danger of AI clearly does not lie in its “intelligence”. It lies in its application by decision-makers who have no idea about the technology and its limitations, but who see AI as a welcome instrument for controlling data and people. Data, as we all know, is the new oil, and AI is supposed to be the drilling rig. The IT industry will gladly keep the myth alive, given well-filled funding pots and customers willing to pay.

This will likely lead to AI being entrusted with decisions that influence the economy, governments, laws and our lives. Decisions that no one can comprehend and for which no one is responsible afterwards. What could possibly go wrong?
