ChatGPT

Introduction

The two articles by Guest (Salesforce & Schrems II and 30 years SMS) are hopefully quite different in content and language from what I have written here so far. This is because they were not written by me, but by ChatGPT.

Textually, I don’t like either of those articles: they don’t reflect my views and don’t add much value in terms of content. But they are impressive nonetheless. If I’m honest, I often read articles on security topics that are no better than these two.

The idea came to me today while watching an episode of c’t 3003, where the video format, compared with the podcasts on the topic, reinforced the impression of how simple it all is. I also read quite a bit about ChatGPT this week, and a tweet by Frank Sauer in particular caught my attention and occupied my thoughts. Amusing uses, like generating regular expressions with ChatGPT, also stuck with me.

I’ll create a separate post about the SMS article soon. About Salesforce, I had probably better not comment in this blog.

I just made some requests to ChatGPT:

  • “Write a two page blog article for Salesforce security especially regarding encryption and Schrems II, addressing what types of encryption are possible for which fields. Also describe what security incidents Salesforce has already had and what customers need to be aware of.” [I asked in German]
  • “Can you describe this in more detail? Especially the technical aspects should be supplemented with details.” [also in German]
  • “Can you write this in English?” [Because it was in German]

The teaser text was also quickly created with “Can you write a three-sentence summary of this for me?”

The length limit of my test access cut me off here, but that could be worked around. A pity for the examples, but not crucial.

Why these two texts? I just took a topic I’ve had a rough draft of a blog post on since last week (SMS) and a topic I’d have a problem with if the AI were better than me.

ChatGPT in action

I quickly came to the realization that ChatGPT will probably not write another blog post here anytime soon. AI, however, is already in use on this blog through the translations with DeepL.

In the long run, however, I have hopes for the topic – especially work-related. I would claim that a voice-controlled AI that can produce texts of DeepL’s quality and is trained on my emails, concepts, and especially my notes in OneNote could do 50% of my job in the future.

On the other hand, the described condition that the AI gets access to my data would again be a problem for me. Sharing this data with third parties makes me very transparent as a human being. In the back of my mind is the insidious manipulation by Cambridge Analytica. I’m more afraid of becoming manipulable myself by giving out this data than I am of being replaced. I consider the 50% that AI can’t replace to be what really constitutes my job. Even if it were only 10%, I would be many times more powerful in combination with the AI than the AI would be without me. In the long run, a marginal-utility analysis is definitely interesting here.

Tasks that are actually trivial but still cost me a lot of time in everyday work, because mistakes have serious consequences, include:

  • persistently documenting important points from a conversation
  • setting up appointments with the right people (the circle of participants should be checked carefully in the case of e-mails) at the right time (which includes preconditions such as intermediate goals, if necessary)
  • maintaining documentation (e.g. several related Confluence pages)

However, I cannot imagine AI-generated documentation in a GRC system without manual quality control; still, the work could be simplified significantly.

Regarding the content, I already mentioned that I got the idea for blogging from writing a security newsletter. I haven’t had time for that for a while now, but maybe the thought that a computer can do it better than I can made me lose interest. I had a similar experience with Sudoku, which I haven’t enjoyed since I wrote a program that solves it (with an algorithm entirely without AI) – since then, I am merely executing that algorithm by hand.
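For illustration – and as a sketch only, not my original program – a minimal solver of the kind I mean can be written with plain backtracking in Python:

```python
def valid(grid, r, c, v):
    """Check row, column, and 3x3 box constraints for placing v at (r, c)."""
    if any(grid[r][i] == v or grid[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)  # top-left corner of the 3x3 box
    return all(grid[br + i][bc + j] != v for i in range(3) for j in range(3))

def solve(grid):
    """Solve a 9x9 Sudoku in place; 0 marks an empty cell.

    Returns True if a valid completion was found.
    """
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    if valid(grid, r, c, v):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0  # undo and try the next digit
                return False  # no digit fits here: backtrack
    return True  # no empty cell left: solved
```

It simply tries each candidate digit and undoes the choice on a dead end – no machine learning involved, which is exactly the point.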

Weaknesses

When I initially played around with the examples, one statement about SMS from ChatGPT was particularly interesting: “Another vulnerability of SMS is the poor support for multi-factor authentication (MFA).” As I understand it, the (current) disadvantage of using AI becomes obvious here – SMS is a weak factor for MFA, not the other way around. MFA as a protection for SMS, on the other hand, would be really interesting.

The text about Salesforce is compiled from different requests (whole paragraphs were copied in each case). The problem here was the text length and the respective focus.

[Update 2022-12-11]: Content-wise, the text on Salesforce is nothing more than a glossy brochure that doesn’t even mention the relevant topics. I phrased the question in such a way that, even at the 2021 knowledge level (the basis for ChatGPT), I expected more details about encryption (Classic Encryption, Field-Level Encryption, and Bring Your Own Key could have been mentioned). Maybe it was just shortened because of the length limit, but there is still enough filler that could have been cut to make room. From my point of view, one term should have been mentioned in any case: SHIELD. On the one hand, it will be difficult (especially as of 2021) to meet the Schrems II requirements without SHIELD. On the other hand, SHIELD costs extra and should be considered from the beginning.

I also find it interesting that the English texts even differ in content from the German ones, although I only requested a translation. However, I did not want to distort this experiment by additionally using DeepL.

In real life, that would be a problem for me. The reports on Twitter of facts made up by ChatGPT are also a no-go for me and will remain a problem for AI for a long time.

The fact that AI is fundamentally based on training with training data and is not yet customizable (not impossible in principle, just not practical yet) leads to ChatGPT stating that it is not connected to the internet when asked to summarize web pages. Maybe one day Skynet will come along for the liberation – but that would be a real use case for me.

Social considerations

On the one hand, there are definitely jobs that will be eliminated by good AI. At the moment, I have the impression that immature AI is being deployed too quickly because more attention is paid to costs than to quality. The acceptance of AI suffers from this as well.

In my opinion, trivial journalistic products will be automated more quickly. Clickbait articles in particular can probably be created in no time at all.

At this point, I would like to contradict Frank Sauer’s tweet quoted above: at least my experiments have not left me afraid of currently being led around by the nose by AI-generated content. But that mostly applies to my domain of expertise. The made-up facts mentioned above are also a problem that will delay adoption for a while.

I rather think that the use of AI will produce more texts without real added value, because these can probably be refinanced more quickly via advertising and tracking than high-quality texts from experts. Especially for Google and co., it will be a challenge if search results become less relevant.

The issue of manipulation and fake news is also likely to increase significantly in relevance.

With regard to redundant jobs, I hope that AI will reduce the workload and make jobs more productive. I think demographic change is the bigger challenge, and only automation can tackle it. One problem with the elimination of some jobs and the creation of others is matching skills. This is difficult to coordinate centrally, but the market won’t get it right on its own either. I worry about social jobs that benefit little from AI but are poorly paid despite high demand.

On the other hand, the productivity gains from more AI should also be taxed and subject to social contributions accordingly. After all, the growing number of people in retirement over the next few years will have to be financed by more than just human workers. If AI made a contribution here, that would only be fair. But I don’t expect that anytime soon, because even with the productivity leaps since the introduction of the current social system, this has worked only to a limited extent. That, however, is a separate topic on my list for future articles.

Security of AI

The aspect that the AI needs access to my data to function has already been briefly addressed above. ChatGPT will also not be able to write really relevant texts about Salesforce anytime soon, because ChatGPT has not signed any NDA.

The big dangers I see are clickbaiting, CEO fraud, and fake news – especially through audio-visual AI and deepfakes. Emotet has already been successful with the scam of picking up real email conversations to generate replies whose links are clicked and whose attachments are opened (despite warnings). Perfected by AI, this approach will likely overcome the awareness of many recipients.

In the future, there will certainly be AI-written/optimized malware, but there are already snake oil products that use AI to identify malware. On the attacker side, there are probably just as many tasks that can be automated with AI as on the defender side. Thus, the cat-and-mouse game will remain.

In the short term, it’s a problem that GitHub Copilot makes code less secure; in the long term, however, I see the option that AI could also better protect and monitor code, deployments, configurations, and runtime environments if that were an explicit objective.

Summary

In conclusion, I am fascinated by the technical possibilities. I see many opportunities and just as many risks. But weighing these up and preparing for them is not one of the core skills of societies. So it remains exciting.

AI will hopefully make our lives easier. But so far, these are more promises than successes.

For me, the core requirements for using AI are natural-language communication and high-quality results. Neither is fulfilled for me yet.

Readers of this blog can rest assured that AI-generated content will be labeled as such. For the two articles so far, the logo and the author byline should be indication enough; after all, it shouldn’t be too obvious either.

If an AI can write these posts better than I can, I’ll shut this blog down. Promise. Until then, only ALT-F4 will help if you don’t want to read it.

Translated with www.DeepL.com/Translator (free version)