Will AI Threaten Cybersecurity Blogging?

A few years ago, an auditor approached me and asked if I had heard about a website that could automatically generate a security policy after the user answered a few questions. I was uncertain whether to take this as a way to simplify my job or as a veiled critique. Naturally curious, I tested the tool and was surprised at how good it actually was. However, I did not feel threatened by it. Now the emergence of even more sophisticated Artificial Intelligence (AI) writing tools is causing real concern across many industries. Could this threaten the blogging community, and could it also affect the cybersecurity community?

The primary reason I was not threatened by an automated security policy generator is that it did exactly what one would expect of a policy. It produced clean, dry, emotionless, neutral prose whose purpose was to spell out rules that everyone can follow. It was definitely a time-saver in the life of a busy cybersecurity professional.

Right now, one of the biggest worries is in education, where teachers are concerned that students will simply use AI tools to write term papers rather than do the research themselves. Some schools are taking the low-tech and, sadly, ineffective approach of blocking the most popular AI sites on their networks. More promising is the emergence of at least one tool that tests whether a piece of writing was AI-generated. Forbes magazine has devoted some time to exploring the ethics of AI-generated works. Where does this leave us in the cybersecurity blogging field?

We tested a popular AI engine to see exactly what it could produce when given the question: “Which Industries are most likely to pay Ransomware?” The results were very promising. The AI followed the standard writing format: an introduction, complete with thesis questions, followed by a body in which each point was cogently presented, and finally a conclusion. A short sample of the output illustrated the point.
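For anyone curious to repeat this kind of test, the snippet below is a minimal sketch of how the same question could be submitted to a hosted model programmatically. It assumes the OpenAI Python SDK and an illustrative model name; the engine we tested is not named here, and any comparable LLM API would work the same way.

# Minimal sketch: asking a hosted LLM the same blog-style question.
# Assumes the OpenAI Python SDK ("pip install openai") and an API key
# in the OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any chat-capable model works
    messages=[
        {"role": "system", "content": "You are a cybersecurity blogger."},
        {"role": "user", "content": "Which industries are most likely to pay ransomware?"},
    ],
)

# Print the generated draft so it can be compared with a human-written post.
print(response.choices[0].message.content)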

Should cybersecurity bloggers, or any other bloggers, be concerned about AI? The answer, much as with the instant security policy generator, depends on the kind of information being shared.

From a general cybersecurity perspective, some have predicted that AI will be used to design better phishing attacks, but, conversely, that it will also be used to thwart those attacks, which simply perpetuates the constant race to outsmart the attackers. Essentially, this changes nothing.

It seems that an AI tool can replicate information, but it cannot create ideas. So far, AI generators have produced a great deal of content based on what already exists. Whether it is an article on a known topic, such as which organizations are likely to pay the ransom after a cyberattack, or even a script for a popular television comedy show, AI can do a fair job of regurgitating what is already on the internet, and it can do so without setting off any plagiarism alarms. One must remember, however, that it is only capable of assembling facts within the parameters it is given, drawn from information previously created by humans.

In the creative arts, there have always been “schools” dedicated to replicating a master’s work. The difference here is the speed at which creative works can be reused, possibly in violation of copyright laws; that will be a problem for legislators to sort out. It has long been said that imitation is the greatest form of flattery. While AI flatters us, it remains to be seen whether it can create an original work that draws new insights from already known information. For now, it would seem that the creative spark still resides with humans.
