What is ChatGPT anyway? What impact will AI have on our everyday lives and what are the risks of using it?
We asked Mr. Voswinkel, data protection officer at DATEV eG, about this:
What is artificial intelligence?
The aim is to enable devices to act predictively. Such systems are meant to perceive inputs and respond to them appropriately in their respective context. Put simply, the goal is to store these impressions as knowledge and apply it to new situations.
And what is ChatGPT?
GPT stands for "Generative Pre-trained Transformer". It is a text generator that uses artificial intelligence to turn a user's request into answers and assistance, drawing on information learned in advance from an underlying body of data. ChatGPT is as easy to use as the familiar search engines on the net. The difference is that ChatGPT "supposedly" takes over the searching and evaluation of results, presenting a complete answer rather than only links or short snippets.
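To make the idea of a "pre-trained" text generator more concrete, here is a deliberately tiny sketch in Python. It is our own illustration, not how GPT actually works: real models learn statistical patterns over billions of texts, while this toy merely "learns" which word follows which in a short training sentence and then generates a continuation for a prompt.

```python
import random

def train(text):
    """Toy 'pre-training': record which word tends to follow which."""
    words = text.split()
    model = {}
    for current, nxt in zip(words, words[1:]):
        model.setdefault(current, []).append(nxt)
    return model

def generate(model, prompt_word, length=5, seed=0):
    """Generate a continuation word by word from the learned transitions."""
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:
            break  # no known continuation for the last word
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = "the model learns patterns and the model generates text"
model = train(corpus)
print(generate(model, "the"))
```

The essential point carries over: the system answers only on the basis of what it has seen before, which is also why its answers can be wrong or outdated.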
How are the risks to be assessed?
Using such results can lead to data protection violations, for example when the legal basis of the data processing is not stated or other information obligations are disregarded.
In addition, text generators and AI are increasingly being used by cybercriminals to formulate contextual emails that mimic the language, syntax and habits of real people by feeding the system with previous email correspondence. In this way, existing email threads can be continued with natural-sounding language and plausible content, making fake emails even harder to detect.
How should this be handled in a professional environment?
Currently, the legislative bodies of the European Union, as well as numerous other countries worldwide, are working on drafts of AI regulations and laws intended to make the use of AI safe and transparent.
Thus, reality is faster than legislation. Especially until such laws take effect, these systems should therefore be handled with care, and no sensitive information such as trade secrets or personal data should be disclosed. Employees need to know which information or questions may be entered into an AI system.
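One practical way to support such internal guidelines is a simple pre-check that blocks obviously sensitive input before it ever reaches an AI system. The sketch below is our own minimal illustration; the patterns and the keyword blocklist are assumptions for demonstration, and a real deployment would need far broader and more careful detection.

```python
import re

# Hypothetical examples of patterns for obviously sensitive input;
# a real company policy would define these far more thoroughly.
SENSITIVE_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),                 # email addresses
    re.compile(r"\b\d{2}\.\d{2}\.\d{4}\b"),                     # dates (DD.MM.YYYY)
    re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{12,30}\b"),            # IBAN-like strings
]
BLOCKED_KEYWORDS = ["trade secret", "confidential", "customer list"]

def may_submit(prompt: str) -> bool:
    """Return True only if the prompt contains none of the flagged content."""
    lowered = prompt.lower()
    if any(keyword in lowered for keyword in BLOCKED_KEYWORDS):
        return False
    return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

print(may_submit("How do I format a date column in Excel?"))
print(may_submit("Summarize this confidential contract"))
```

Such a filter cannot replace employee awareness, but it makes the guideline "what may be entered" enforceable at the point of use.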
The urgency of internal guidelines, and of enforcing them, is demonstrated quite emphatically by the fact that one of the large American technology groups has banned internal business use of the very system it develops and operates itself where sensitive information is involved. If companies that profit directly from the use of their own systems already exercise such caution, that should be a clear signal to everyone else.