Symmetric Encryption

The attributes of symmetric encryption

What makes encryption symmetric? You know what symmetry is, right? So you can hazard a guess… Symmetric encryption algorithms use the same key for both encryption and decryption. They are typically used on blocks of data such as files, or on streams of data such as a network data flow. Table 1, shown below, defines six of the main attributes of symmetric encryption.

Table 1: The main attributes of symmetric encryption

Attribute           | Property
Key                 | One key is shared between two or more entities.
Key exchange        | Out of band (communication outside of the main system: for example, sending a text message with the key, password or login code to access your email or banking website) or by using asymmetric encryption.
Speed               | Fast with minimal complexity.
Key length          | Fixed key length: 128-bit, 192-bit or 256-bit, for example.
Applications        | Bulk data encryption: files, databases, logical partitions and physical drives. Protecting data in transit: Internet Protocol Security (IPsec), Secure Shell (SSH) and Transport Layer Security (TLS)/Secure Sockets Layer (SSL).
Security properties | Confidentiality.

Symmetric encryption algorithms

Some of the best-known encryption algorithms in use across modern technology are symmetric algorithms. The list includes DES (Data Encryption Standard), 3DES (Triple DES), Twofish, Blowfish, RC4, Serpent and AES (Rijndael).

Data Encryption Standard (DES)

The original DES was published on 15 January 1977 by the National Bureau of Standards (NBS, the predecessor of today's NIST). It is a symmetric block cipher with a 64-bit block size. The block size of a cipher is the fixed number of bits that the cipher operates on during any encryption/decryption operation. It turns out that with a 64-bit block size, you don't have to encrypt a particularly large block or stream of data before you run into problems of block repetition. Still, between 1977 and the retirement of DES, that threshold seemed like a large amount of data (less so nowadays).
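To put a rough number on that block-repetition problem: by the birthday bound, repeated ciphertext blocks become likely after roughly 2^(n/2) blocks for an n-bit block cipher. A quick back-of-the-envelope sketch for DES:

```python
# Birthday-bound estimate for a 64-bit block cipher such as DES.
# Repeated ciphertext blocks become likely after about 2**(64/2) blocks;
# each block is 8 bytes, so the danger threshold is surprisingly low.
block_bits = 64
birthday_blocks = 2 ** (block_bits // 2)        # about 4.3 billion blocks
data_bytes = birthday_blocks * (block_bits // 8)
print(f"Repetition likely after ~{data_bytes / 2**30:.0f} GiB of ciphertext")
```

That works out to only 32 GiB, a trivial amount by today's standards, which is one reason AES moved to a 128-bit block size.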

DES uses a 64-bit key; however, 8 bits are sacrificed as overhead for error detection, meaning the effective key length is 56 bits. With a key length of 56 bits, around 72 thousand trillion (2^56) keys are possible. That's a lot of possibilities. DES, as well as many other well-known symmetric ciphers (3DES, Twofish, Blowfish and RC5), is based on a structure known as a Feistel cipher. Feistel ciphers have two main advantages. First, they provide structural reusability (the same structure can be used for encryption and decryption); therefore, the decryption logic does not need to be implemented separately. Second, the round function does not need to be invertible, so it can be built from one-way functions. I'll write a post down the line about Feistel ciphers and if I've done it yet then you can click HERE to go read it. OK, you're still here, so I obviously haven't written it yet. Get on Twitter and give me a nudge. Right, let's look in a little more detail at how these encryption algorithms work.
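To make the structural-reusability point concrete, here is a toy Feistel network in Python. It is purely illustrative: the round function, block size and round keys are made up, and it is nothing like as strong as DES. Notice that decryption runs the same structure with the round keys in reverse order, and that the round function f is never inverted:

```python
# Toy Feistel network on a 32-bit block (illustrative only -- not DES,
# and not secure). The round function f need not be invertible.
def f(half: int, key: int) -> int:
    return (half * 31 + key) & 0xFFFF  # arbitrary 16-bit mixing function

def feistel_encrypt(block: int, keys: list[int]) -> int:
    left, right = block >> 16, block & 0xFFFF
    for k in keys:                     # each round: swap halves, XOR in f
        left, right = right, left ^ f(right, k)
    return (left << 16) | right

def feistel_decrypt(block: int, keys: list[int]) -> int:
    left, right = block >> 16, block & 0xFFFF
    for k in reversed(keys):           # same structure, keys reversed
        right, left = left, right ^ f(left, k)
    return (left << 16) | right

keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]
ct = feistel_encrypt(0xDEADBEEF, keys)
assert feistel_decrypt(ct, keys) == 0xDEADBEEF
```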

Substitution box (S box) & Permutation box (P box)

Symmetric block ciphers typically use structures called S-boxes and P-boxes, which work with their keys to transform input into output. S-boxes and P-boxes work together to obscure the relationship between the key and the ciphertext; this conforms to the Shannon principle of confusion. S-boxes are used in many symmetric block ciphers including DES, 3DES, Blowfish, Twofish and the daddy of them all, AES. An S-box takes a fixed number of bits at the input and maps it to a fixed number of bits at the output according to an agreed lookup table; the output and input need not be the same bit length. S-boxes are designed with diffusion in mind, such that changing a single input bit should result in the change of two or more output bits, and should also, on average, keep a reasonable balance between 0s and 1s.

Permutation boxes typically come after S-boxes and effectively jumble the block in a predetermined way. Again, this is done to obfuscate the relationship between the key and the ciphertext. Taken together, the S-box and the P-box make up what's called an SP network, and encryption algorithms use a number of these SP networks in succession (known as rounds, as in boxing rounds) to transform the input plaintext block into the output ciphertext block.
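A single round of an SP network can be sketched in a few lines of Python. The 4-bit S-box and the bit permutation below are example values chosen for illustration, not taken from any real cipher:

```python
# Minimal one-round SP network on a 16-bit block (illustrative only).
# The 4-bit S-box supplies confusion; the bit permutation supplies diffusion.
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]
PERM = [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15]

def substitute(block: int) -> int:
    out = 0
    for i in range(4):                      # four 4-bit nibbles
        nibble = (block >> (4 * i)) & 0xF
        out |= SBOX[nibble] << (4 * i)
    return out

def permute(block: int) -> int:
    out = 0
    for src, dst in enumerate(PERM):        # move bit src to position dst
        out |= ((block >> src) & 1) << dst
    return out

def sp_round(block: int, round_key: int) -> int:
    return permute(substitute(block)) ^ round_key
```

A real cipher chains many such rounds together, mixing in a different round key each time.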

Advanced Encryption Standard (AES)

When DES was overtaken by data demands and Moore's law, and placed into an early grave by being demonstrably brute-forced, the world needed to find a successor. That successor needed to be robust enough to last for the foreseeable future, so a global competition invited all comers to compete to have their algorithm take the crown and be named the new Advanced Encryption Standard. In the meantime DES needed to be temporarily shored up, and thus was born 3DES, which fudged a bit more security but not in a sustainable way.

The competition whittled the 15 shortlisted proposals down to five finalists, which went into the final stages where they were tested and challenged to destruction. These finalists were MARS, RC6, Rijndael, Serpent and Twofish. Each algorithm had done well to make it that far, but there could be only one winner: the two Belgian cryptographers Vincent Rijmen and Joan Daemen won the contest, and Rijndael was announced as the lucky algorithm that would lose its name and forever be known as AES.

Rijndael Encryption Algorithm

Let’s take a look at the winner of the competition, the Rijndael algorithm. Rijndael has a 128-bit block size and uses three key lengths, namely 128-, 192- and 256-bit keys. Like Feistel ciphers, it uses repeated rounds in which each pass further obfuscates the plaintext and, like all encryption algorithms which use rounds, there is a trade-off between the number of rounds and computational overhead. So 128-bit-key AES (Rijndael) uses 10 rounds, 192-bit uses 12 and 256-bit uses 14. Each round (apart from the last, which omits MixColumns) consists of four steps, namely SubBytes, ShiftRows, MixColumns and AddRoundKey. Let's look at each of these in turn.

SubBytes

The SubBytes stage of a Rijndael round is simply the use of an S-Box and the Rijndael S-Box is shown below. This step adds confusion as per the Shannon model.

     00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f
00   63 7c 77 7b f2 6b 6f c5 30 01 67 2b fe d7 ab 76
10   ca 82 c9 7d fa 59 47 f0 ad d4 a2 af 9c a4 72 c0
20   b7 fd 93 26 36 3f f7 cc 34 a5 e5 f1 71 d8 31 15
30   04 c7 23 c3 18 96 05 9a 07 12 80 e2 eb 27 b2 75
40   09 83 2c 1a 1b 6e 5a a0 52 3b d6 b3 29 e3 2f 84
50   53 d1 00 ed 20 fc b1 5b 6a cb be 39 4a 4c 58 cf
60   d0 ef aa fb 43 4d 33 85 45 f9 02 7f 50 3c 9f a8
70   51 a3 40 8f 92 9d 38 f5 bc b6 da 21 10 ff f3 d2
80   cd 0c 13 ec 5f 97 44 17 c4 a7 7e 3d 64 5d 19 73
90   60 81 4f dc 22 2a 90 88 46 ee b8 14 de 5e 0b db
a0   e0 32 3a 0a 49 06 24 5c c2 d3 ac 62 91 95 e4 79
b0   e7 c8 37 6d 8d d5 4e a9 6c 56 f4 ea 65 7a ae 08
c0   ba 78 25 2e 1c a6 b4 c6 e8 dd 74 1f 4b bd 8b 8a
d0   70 3e b5 66 48 03 f6 0e 61 35 57 b9 86 c1 1d 9e
e0   e1 f8 98 11 69 d9 8e 94 9b 1e 87 e9 ce 55 28 df
f0   8c a1 89 0d bf e6 42 68 41 99 2d 0f b0 54 bb 16

Rijndael SubBytes S-Box

Thus, each byte of the 4×4 byte block (known as the state) is systematically substituted for its replacement byte from the table above, and that concludes part 1.
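In code, SubBytes is nothing more than a table lookup. The sketch below (holding the state as a flat list of 16 bytes, an assumption of mine for simplicity) uses the full 256-entry Rijndael S-box; the high nibble of each byte selects the row and the low nibble the column:

```python
# SubBytes sketch: each byte of the state is replaced by its entry in
# the Rijndael S-box (the same 256-byte table shown above).
AES_SBOX = bytes.fromhex(
    "637c777bf26b6fc53001672bfed7ab76"
    "ca82c97dfa5947f0add4a2af9ca472c0"
    "b7fd9326363ff7cc34a5e5f171d83115"
    "04c723c31896059a071280e2eb27b275"
    "09832c1a1b6e5aa0523bd6b329e32f84"
    "53d100ed20fcb15b6acbbe394a4c58cf"
    "d0efaafb434d338545f9027f503c9fa8"
    "51a3408f929d38f5bcb6da2110fff3d2"
    "cd0c13ec5f974417c4a77e3d645d1973"
    "60814fdc222a908846eeb814de5e0bdb"
    "e0323a0a4906245cc2d3ac629195e479"
    "e7c8376d8dd54ea96c56f4ea657aae08"
    "ba78252e1ca6b4c6e8dd741f4bbd8b8a"
    "703eb5664803f60e613557b986c11d9e"
    "e1f8981169d98e949b1e87e9ce5528df"
    "8ca1890dbfe6426841992d0fb054bb16"
)

def sub_bytes(state: list[int]) -> list[int]:
    # The byte value itself is the table index: high nibble = row,
    # low nibble = column in the 16x16 layout above.
    return [AES_SBOX[b] for b in state]

assert sub_bytes([0x00, 0x53, 0xff]) == [0x63, 0xed, 0x16]
```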

ShiftRows

The ShiftRows step is similarly straightforward. The 4×4 state produced by the previous stage is passed to this one, which does the following to it.

  • Row 0 (top row) remains unchanged
  • Row 1 (next row down) is shifted to the left by one byte
  • Row 2 (next row down) is shifted to the left by two bytes
  • Row 3 (bottom row) is shifted to the left by three bytes

Therefore, at the conclusion of this stage, the state is now further obfuscated from the original.
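The bullet points above translate almost directly into code. A sketch in Python, with the state held as four rows of four bytes (my own layout choice for readability):

```python
# ShiftRows sketch: rotate row r of the 4x4 state left by r bytes.
def shift_rows(state: list[list[int]]) -> list[list[int]]:
    return [row[r:] + row[:r] for r, row in enumerate(state)]

state = [[0, 1, 2, 3],
         [4, 5, 6, 7],
         [8, 9, 10, 11],
         [12, 13, 14, 15]]
assert shift_rows(state) == [[0, 1, 2, 3],       # row 0 unchanged
                             [5, 6, 7, 4],       # left by one byte
                             [10, 11, 8, 9],     # left by two bytes
                             [15, 12, 13, 14]]   # left by three bytes
```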

MixColumns

The state produced by the ShiftRows stage is then passed to the MixColumns stage. In this step, each column is matrix-multiplied by a fixed 4×4 circulant matrix (a special type of square matrix in which every row is composed of the same elements, with each row rotated one element to the right relative to the preceding row). I don't propose to go too far into the linear algebra of this step; suffice it to say that the arithmetic at this stage is carried out in a Galois field, GF(2^8).

So, the preceding two steps provide the transformation with its diffusion as per the Shannon model. Also note that the MixColumns step is not performed during the final round, as it would not add value to the encrypted output at that stage.
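For the curious, here is what that Galois-field arithmetic looks like in practice: a Python sketch of MixColumns on a single column, using the standard xtime trick for multiplication by 2 modulo the AES polynomial x^8 + x^4 + x^3 + x + 1 (0x11b). The check at the end is a widely published MixColumns test vector:

```python
# MixColumns sketch: each column is multiplied by the fixed circulant
# matrix [2 3 1 1; 1 2 3 1; 1 1 2 3; 3 1 1 2], with arithmetic in GF(2^8).
def xtime(a: int) -> int:
    """Multiply by 2 in GF(2^8) modulo the AES polynomial 0x11b."""
    a <<= 1
    return (a ^ 0x11B) & 0xFF if a & 0x100 else a

def mix_column(col: list[int]) -> list[int]:
    a0, a1, a2, a3 = col
    def m2(x): return xtime(x)          # multiply by 2
    def m3(x): return xtime(x) ^ x      # multiply by 3 = (2 * x) + x
    return [m2(a0) ^ m3(a1) ^ a2 ^ a3,
            a0 ^ m2(a1) ^ m3(a2) ^ a3,
            a0 ^ a1 ^ m2(a2) ^ m3(a3),
            m3(a0) ^ a1 ^ a2 ^ m2(a3)]

assert mix_column([0xdb, 0x13, 0x53, 0x45]) == [0x8e, 0x4d, 0xa1, 0xbc]
```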

AddRoundKey

The final stage is the AddRoundKey stage. In this stage the state is XORed with the round key: a subkey derived from the initial main key by a separate key-schedule algorithm.
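As code, AddRoundKey is a one-liner, and because XOR is its own inverse, applying the same round key again undoes it, which is exactly what decryption relies on. A Python sketch:

```python
# AddRoundKey sketch: XOR each byte of the state with the round key.
def add_round_key(state: list[int], round_key: list[int]) -> list[int]:
    return [s ^ k for s, k in zip(state, round_key)]

# XOR is self-inverse: applying the same key twice restores the state.
state = list(range(16))
key = [0xA5] * 16
assert add_round_key(add_round_key(state, key), key) == state
```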

AES Rijndael (Overview)

So to conclude, examining the entire process from end to end, we have the following:

  1. Initial plaintext block input
  2. Block is XORed with the initial main key
  3. Round 1 begins
    1. R1 SubBytes
    2. R1 ShiftRows
    3. R1 MixColumns
    4. R1 Round key XORed
    5. Round 2 begins
  4. (Rounds repeat until penultimate round concludes)
  5. Final Round SubBytes
  6. Final Round ShiftRows
  7. Final Round key XORed
  8. Ciphertext block complete
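Putting the numbered list above together, a complete AES-128 block encryption fits in well under a hundred lines of Python. This is an educational sketch only (use a vetted cryptography library in production), checked against the test vector in FIPS-197 Appendix C.1:

```python
# Educational AES-128 block encryption, assembled from the four steps
# described above. The state is held as 16 bytes in column-major order,
# s[row + 4*col], as in FIPS-197.
SBOX = bytes.fromhex(
    "637c777bf26b6fc53001672bfed7ab76"
    "ca82c97dfa5947f0add4a2af9ca472c0"
    "b7fd9326363ff7cc34a5e5f171d83115"
    "04c723c31896059a071280e2eb27b275"
    "09832c1a1b6e5aa0523bd6b329e32f84"
    "53d100ed20fcb15b6acbbe394a4c58cf"
    "d0efaafb434d338545f9027f503c9fa8"
    "51a3408f929d38f5bcb6da2110fff3d2"
    "cd0c13ec5f974417c4a77e3d645d1973"
    "60814fdc222a908846eeb814de5e0bdb"
    "e0323a0a4906245cc2d3ac629195e479"
    "e7c8376d8dd54ea96c56f4ea657aae08"
    "ba78252e1ca6b4c6e8dd741f4bbd8b8a"
    "703eb5664803f60e613557b986c11d9e"
    "e1f8981169d98e949b1e87e9ce5528df"
    "8ca1890dbfe6426841992d0fb054bb16"
)

def xtime(a):
    """Multiply by 2 in GF(2^8) modulo the AES polynomial 0x11b."""
    a <<= 1
    return (a ^ 0x11B) & 0xFF if a & 0x100 else a

def sub_bytes(s):
    return [SBOX[b] for b in s]

def shift_rows(s):
    # Row r of the column-major state is rotated left by r positions.
    return [s[(i % 4) + 4 * ((i // 4 + i % 4) % 4)] for i in range(16)]

def mix_columns(s):
    out = []
    for c in range(4):
        a0, a1, a2, a3 = s[4 * c:4 * c + 4]
        def m3(x): return xtime(x) ^ x   # multiply by 3 = (2 * x) + x
        out += [xtime(a0) ^ m3(a1) ^ a2 ^ a3,
                a0 ^ xtime(a1) ^ m3(a2) ^ a3,
                a0 ^ a1 ^ xtime(a2) ^ m3(a3),
                m3(a0) ^ a1 ^ a2 ^ xtime(a3)]
    return out

def add_round_key(s, k):
    return [a ^ b for a, b in zip(s, k)]

def expand_key(key):
    """Derive the 11 round keys (16 bytes each) from the 16-byte main key."""
    rcon = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]
    w = [list(key[4 * i:4 * i + 4]) for i in range(4)]
    for i in range(4, 44):
        t = list(w[i - 1])
        if i % 4 == 0:
            t = [SBOX[b] for b in t[1:] + t[:1]]   # RotWord then SubWord
            t[0] ^= rcon[i // 4 - 1]
        w.append([a ^ b for a, b in zip(w[i - 4], t)])
    return [sum(w[4 * r:4 * r + 4], []) for r in range(11)]

def aes128_encrypt_block(block, key):
    rk = expand_key(key)
    s = add_round_key(list(block), rk[0])           # initial key XOR
    for r in range(1, 10):                          # nine full rounds
        s = add_round_key(mix_columns(shift_rows(sub_bytes(s))), rk[r])
    # Final round omits MixColumns.
    return bytes(add_round_key(shift_rows(sub_bytes(s)), rk[10]))

# FIPS-197 Appendix C.1 test vector.
pt = bytes.fromhex("00112233445566778899aabbccddeeff")
key = bytes.fromhex("000102030405060708090a0b0c0d0e0f")
assert aes128_encrypt_block(pt, key).hex() == "69c4e0d86a7b0430d8cdb78070b4c55a"
```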

Conclusion

To conclude, we can summarise that symmetric key algorithms are some of the most robust, secure and widely used algorithms in the cryptography toolbox. Their strengths include convenience and speed. They are used primarily for block encryption of data at rest, such as files, directories, volumes and even full disk drives, as well as for data in motion, such as TLS-protected streams in a browser. The original DES algorithm, although eventually overtaken by technological progress, gave rise to a number of extremely secure algorithms, one of which is now known as the Advanced Encryption Standard (AES).

Could ants power Web 3.0 to new heights? OSPF vs ANTS

Having recently completed my latest study on the subject of “Natural and Artificial Intelligence“, I became aware of advances made in the recent decade towards a new paradigm of network traffic engineering. This new model turns its back on traditional destination-based solutions (OSPF, EIGRP, MPLS) to the combinatorial problem of decision making in network routing, favouring instead a constructive greedy heuristic which uses stochastic combinatorial optimisation. Put in more accessible terms, it leverages the emergent ability of systems comprised of quite basic autonomous elements working together to perform a variety of complicated tasks with great reliability and consistency.

In 1986, the computer scientist Craig Reynolds set out to investigate this phenomenon through computer simulation. The mystery and beauty of a flock or swarm is perhaps best described in the opening words of his classic 1987 paper on the subject:

The motion of a flock of birds is one of nature’s delights. Flocks and related synchronized group behaviors such as schools of fish or herds of land animals are both beautiful to watch and intriguing to contemplate. A flock … is made up of discrete birds yet overall motion seems fluid; it is simple in concept yet is so visually complex, it seems randomly arrayed and yet is magnificently synchronized. Perhaps most puzzling is the strong impression of intentional, centralized control. Yet all evidence indicates that flock motion must be merely the aggregate result of the actions of individual animals, each acting solely on the basis of its own local perception of the world.

An analogy with the way ant colonies function has suggested that the emergent behaviour of ant colonies to reliably and consistently optimise paths could be leveraged to enhance the way that the combinatorial optimisation problem of complex network path selection is solved.

The fundamental difference between the modelling of a complex telecommunications network and more commonplace problems of combinatorial optimisation such as the travelling salesman problem is the dynamic nature of the state of a network such as the internet at any given moment. For example, in the TSP the towns, the routes between them and the associated distances don’t change. Network routing, however, is a dynamic problem. It is dynamic in space, because the shape of the network – its topology – may change: switches and nodes may break down and new ones may come on line. But the problem is also dynamic in time, and quite unpredictably so. The amount of network traffic will vary constantly: some switches may become overloaded, there may be local bursts of activity that make parts of the network very slow, and so on. So network routing is a very difficult problem of dynamic optimisation. Finding fast, efficient and intelligent routing algorithms is a major headache for telecommunications engineers.

So how you may ask, could ants help here? Individual ants are behaviourally very unsophisticated insects. They have a very limited memory and exhibit individual behaviour that appears to have a large random component. Acting as a collective however, ants manage to perform a variety of complicated tasks with great reliability and consistency, for example, finding the shortest routes from their nest to a food source.



These behaviours emerge from the interactions between large numbers of individual ants and their environment. In many cases, the principle of stigmergy is used. Stigmergy is a form of indirect communication through the environment. Like other insects, ants typically produce specific actions in response to specific local environmental stimuli, rather than as part of the execution of some central plan. If an ant’s action changes the local environment in a way that affects one of these specific stimuli, this will influence the subsequent actions of ants at that location.

The environmental change may take either of two distinct forms. In the first, the physical characteristics may be changed as a result of carrying out some task-related action, such as digging a hole, or adding a ball of mud to a growing structure. The subsequent perception of the changed environment may cause the next ant to enlarge the hole, or deposit its ball of mud on top of the previous ball. In this type of stigmergy, the cumulative effects of these local task-related changes can guide the growth of a complex structure. This type of influence has been called sematectonic.

In the second form, the environment is changed by depositing something which makes no direct contribution to the task, but is used solely to influence subsequent behaviour which is task related. This sign-based stigmergy has been highly developed by ants and other social insects, which use a variety of highly specific volatile hormones, or pheromones, to provide a sophisticated signalling system. It is primarily this second mechanism of sign-based stigmergy that has been successfully simulated with computer models and applied as a model to a system of network traffic engineering.

In the traditional network model, packets move around the network completely deterministically. A packet arriving at a given node is routed by the device which simply consults the routing table and takes the optimum path based on its destination. There is no element of probability as the values in the routing table represent not probabilities, but the relative desirability of moving to other nodes.

In the ant colony optimisation model, virtual ants also move around the network, their task being to constantly adjust the routing tables according to the latest information about network conditions. For an ant, the values in the table are probabilities that its next move will be to a certain node. The progress of an ant around the network is governed by the following informal rules:

  • Ants start at random nodes.
  • They move around the network from node to node, using the routing table at each node as a guide to which link to cross next.
  • As it explores, an ant ages, the age of each individual being related to the length of time elapsed since it set out from its source. However, an ant that finds itself at a congested node is delayed, and thus made to age faster than ants moving through less choked areas.
  • As an ant crosses a link between two nodes, it deposits pheromone; however, it leaves it not on the link itself, but on the entry for that link in the routing table of the node it left. Other ‘pheromone’ values in that column of the node’s routing table are decreased, in a process analogous to pheromone decay.
  • When an ant reaches its final destination it is presumed to have died and is deleted from the system. R.I.P.
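The informal rules above can be sketched in a few lines of Python. This is a toy illustration of the idea only (the table layout, deposit value and function names are my own, not those of the published AntNet algorithm): each node keeps, per destination, a probability for each neighbouring node; an ant reinforces the entry for the link it crossed while the rival entries decay so the column still sums to one:

```python
import random

# Toy sketch of a pheromone-style routing table update (illustrative only).
# table[destination][neighbour] holds the probability that an ant bound
# for `destination` is forwarded to `neighbour`.
def reinforce(table, destination, chosen, deposit=0.1):
    """Deposit 'pheromone' on the chosen link; renormalising the column
    shrinks the rival entries, analogous to pheromone evaporation."""
    probs = table[destination]
    probs[chosen] += deposit
    total = sum(probs.values())
    for n in probs:
        probs[n] /= total
    return probs

def next_hop(table, destination):
    """Ants choose their next node probabilistically, not deterministically."""
    probs = table[destination]
    return random.choices(list(probs), weights=list(probs.values()))[0]

table = {"D": {"n1": 0.5, "n2": 0.5}}
reinforce(table, "D", "n1")            # an ant just crossed the link to n1
assert table["D"]["n1"] > table["D"]["n2"]
```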



Testing the ant colony optimisation system and measuring its performance against a number of other well-known routing techniques produced good results, and the system outperformed all of the established mechanisms. However, there are potential problems of the kind that constantly plague dynamic optimisation algorithms. The most significant is that, after a long period of stability and equilibrium, the ants become locked into their accustomed routes. They become unable to break out of these patterns to explore new routes capable of meeting the new conditions that a sudden change to the network would create. This can be mitigated, however, in the same way that evolutionary computation introduces mutation to fully explore new possibilities: by introducing an element of purely random behaviour to the ant.

‘Ant net’ routing has been tested on models of US and Japanese communications networks, using a variety of different possible traffic patterns. The algorithm worked at least as well as, and in some cases much better than, four of the best-performing conventional routing algorithms. Its results were even comparable to those of an idealised ‘daemon’ algorithm, with instantaneous and complete knowledge of the current state of the network.

It would seem we have not heard the last of these routing antics.