Whispers & Screams
And Other Things

Classful IP Addressing (IPv4)

IP addressing is among the most important topics in any examination of TCP/IP. The IP address is a 32-bit binary identifier which, when configured correctly, enables each machine on an IP network to be uniquely identified. It is used to allow communication with any specific device on the network.

An IP address is defined in software and can be configured or changed as needed, whether by a human or by a software process (as opposed to a MAC address, which is a permanent, hard-coded hardware address that cannot easily be changed). IP addressing was designed to allow media-independent communication between any two hosts on the same, or different, IP networks.

Terminology

As a precursor to looking at IP addressing in some detail, let's define some basic terminology.

Byte - A byte is a unit of binary information that most commonly consists of eight bits. In the course of this post, the term Octet will also be used to mean the same thing.

IP Address - An IP address is a 32-bit binary number which represents, when assigned to a network device, its unique Network Layer (Layer 3) address. IP addresses are commonly described in Dotted Decimal notation for ease of human readability. Dotted Decimal notation is the conventional way of describing an IP address, e.g. 192.168.1.1, and is formed by separating the 32-bit IP address into 4 x 8-bit Octets, converting each Octet into a decimal number between 0 and 255, and separating each of these Octets with a dot. An IP Address is also frequently referred to as a Network Address and the terms can be used interchangeably; however, IP Address is by far the more common.

Broadcast Address - On any IP network, the Broadcast Address is the address used to send traffic to all hosts that are members of, and connected to, that IP network.

IP Addressing

As mentioned previously, an IP address is made up of 32 binary bits. It is extremely important to always bear this fact in mind when working with IP addresses, as failing to do so can significantly impair one's ability to fully understand and manipulate the IP addressing system as required.
IP addresses are commonly described in one of three ways:

    1. Dotted Decimal (as described above)

    2. Binary (as a 32-bit binary number)

    3. Hexadecimal (rarely used, but can be seen when addresses are stored within programs or during packet analysis)


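To make these three notations concrete, here is a minimal sketch in plain Python (the address 192.168.1.1 is just an arbitrary example) which prints the same address in all three forms:

```python
# A minimal sketch, standard library only: one IPv4 address in all
# three common notations.
addr = "192.168.1.1"
octets = [int(o) for o in addr.split(".")]

binary = ".".join(f"{o:08b}" for o in octets)   # each octet as 8 bits
hexa = ".".join(f"{o:02X}" for o in octets)     # each octet as 2 hex digits

print(addr)    # 192.168.1.1  (Dotted Decimal)
print(binary)  # 11000000.10101000.00000001.00000001  (Binary)
print(hexa)    # C0.A8.01.01  (Hexadecimal)
```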

One important aspect of an IP address is that it is a hierarchical address. This has a number of advantages, not least of which is that it enables addresses to be aggregated together, which greatly simplifies the mechanisms used to route traffic around the Internet.
In IPv4 there are, in theory, some 4.3 billion (2^32) IP addresses available, and without this mechanism for route aggregation it would be necessary for Internet routers to know the location of every one of these connected devices.
The hierarchical system used by IPv4 is one which separates the IP address into two components, namely a network part and a host part.
In practice this "two component" system is further split down, as the host part is frequently subdivided into even smaller subnetworks. In this post, however, we will limit our discussion to the "two component" system.

This term, "subnetwork" (often abbreviated to subnet), is used so frequently within the network engineering community that it has become part of the jargon of the trade. That status suggests a term with a great deal of complexity behind it, but it is actually extremely simple: a subnetwork (subnet) is any subdivision of a larger network. It really is as simple as that.
The Two Component System / Network part and Host part

In order to make IP addresses hierarchical, a two component system has been created. This system splits the IP address into two parts known as The Network Part and The Host Part. This can be likened to a telephone number where (typically) the first 4 or 5 digits represent the town or city and the subsequent 6 or 7 digits represent the individual line.
The designers of this hierarchical addressing scheme created 5 classes of IP address by splitting up the full range of 4.3 billion addresses in a logical way. These 5 classes (or subdivisions) are known as Class A, B, C, D, and E networks.
For the purposes of this post we shall concern ourselves primarily with classes A, B and C; however, I shall briefly introduce each of the classes in the following section.
The 5 Network Classes
The image below depicts the 5 classes of IP Network as well as some of the basic features associated with each.

[Image: the five IPv4 address classes (A to E), showing the fixed leading bits and the Network/Host split for each]

Class A - Class A networks were designed for use in networks which needed to accommodate a very large number of hosts.
As can be seen from the diagram, the first bit in a Class A address is always 0.
In each of network classes A, B and C, we can also see that the addresses are split into two parts, namely Network and Hosts.
These parts can be likened to the two parts of the telephone number described earlier.
The Network part is like the city code and the Host part is like the rest of the telephone number.
As you can see from the image, the division between the Network and Host part is set after the 8th bit. This means that we have 7 bits available to represent different Networks and 24 bits available to represent the individual hosts within each of the Class A networks.
It is clear therefore that, since the first bit must always be 0, the lowest network address available is 00000000.X.X.X (0 in decimal) and the highest network address available is 01111111.X.X.X (127 in decimal).
It would seem therefore that the range of addresses available to Class A networks is 0.X.X.X up to 127.X.X.X (Where X represents the Host part) but I shall demonstrate later that the 0 and 127 networks are reserved therefore the Class A address range runs from 1.X.X.X to 126.X.X.X in practice.

Class B - In Class B networks, the split between the network part and the host part happens after the 16th bit.
In any Class B network address the first two bits must always be set to 10. This leaves 14 bits to define the network number and allows addresses to range from 10000000.00000000.X.X up to 10111111.11111111.X.X .
These binary addresses equate in decimal to the first two Octets of Class B addresses ranging from 128.0.X.X up to 191.255.X.X
Class C - The pattern now emerging is that Class C addresses use the first 3 Octets to define the Network part of their addresses. Again, as with Class A and B networks, some bits are permanently defined; in the case of Class C network addresses, the first 3 bits are always set to 110.
This means that we have 21 bits available to define the network part of Class C network addresses ranging (in binary) from 11000000.00000000.00000000.X up to 11011111.11111111.11111111.X which in decimal equates to 192.0.0.X up to 223.255.255.X
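As a short illustration of the pattern just described, the following Python sketch (standard library only; the sample addresses are arbitrary) determines the class of an address from the leading bits of its first octet:

```python
def ipv4_class(address: str) -> str:
    """Classify an IPv4 address as A-E from the leading bits of octet one."""
    first = int(address.split(".")[0])
    if first >> 7 == 0b0:        # 0xxxxxxx ->   0-127
        return "A"
    if first >> 6 == 0b10:       # 10xxxxxx -> 128-191
        return "B"
    if first >> 5 == 0b110:      # 110xxxxx -> 192-223
        return "C"
    if first >> 4 == 0b1110:     # 1110xxxx -> 224-239
        return "D"
    return "E"                   # 1111xxxx -> 240-255

for ip in ("10.1.1.1", "172.16.0.5", "200.100.50.25", "230.0.0.1"):
    print(ip, "-> Class", ipv4_class(ip))
# 10.1.1.1 -> Class A, 172.16.0.5 -> Class B,
# 200.100.50.25 -> Class C, 230.0.0.1 -> Class D
```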

Class D - Class D (224-239) is reserved for Multicast Addressing, which is beyond the scope of this post; for more detail, please see Class D Networks and IP Multicasting.

Class E - Class E (240-255) is reserved for scientific experimentation and research; if any subsequent posts on this blog examine Class E networks, they will be linked to from here.


Lightweight Directory Access Protocol (LDAP)

Traditional network engineers who arrive at the networking industry via the world of telecommunications can often find themselves unfamiliar with certain facets of the industry, such as network security and servers. A protocol which lies at the intersection between network security and server technology is LDAP, the Lightweight Directory Access Protocol.

So what is LDAP and what is it used for? Let's take a look at the protocol in some detail.

Within the OSI model, LDAP sits at Layer 7 and is, as such, an application layer protocol. LDAP is also an "open" protocol, meaning its standards are public information and it is not owned by or associated with any individual commercial organisation. Its primary purpose is to provide access to, and maintenance of, distributed directory information services over an IP network, and it was specified to operate seamlessly as part of a TCP/IP-modelled network.

The most common usage of LDAP is to provide a mechanism for "single sign on" across a distributed, multi-facility IT estate, in order to minimise repeated authentication across multiple services. LDAP is based on a subset of the older and more heavily specified X.500 protocol, which was designed to be compatible with the more abstract OSI model.

When people talk about “LDAP”, they are really talking about the complex combination of business rules, software and data that allow you to log in and get access to secure resources.

A client starts an LDAP session by connecting to an LDAP server, called a Directory System Agent (DSA), by default on TCP and UDP port 389, or on port 636 for LDAPS. The Global Catalog is available by default on port 3268, or 3269 for LDAPS. The client then sends an operation request to the server, and the server sends responses in return. With some exceptions, the client does not need to wait for a response before sending the next request, and the server may send the responses in any order. All information is transmitted using Basic Encoding Rules (BER); these encodings are commonly called type-length-value or TLV encodings. The LDAP server hosts something called the directory-server database, so the LDAP protocol can be thought of loosely as a network-enabled database query language.
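As an illustrative sketch only, using the third-party Python ldap3 library (the hostname, credentials and base DN below are placeholders, not real values), a simple session of the kind just described might look like this:

```python
from ldap3 import Server, Connection, ALL

# Placeholder server and credentials -- substitute your own directory's.
server = Server("ldap.example.com", port=389, get_info=ALL)

# Bind: authenticate and open the session.
conn = Connection(server,
                  user="cn=admin,dc=example,dc=com",
                  password="secret",
                  auto_bind=True)

# Search: retrieve directory entries beneath a base DN.
conn.search(search_base="dc=example,dc=com",
            search_filter="(objectClass=person)",
            attributes=["cn", "sn"])
for entry in conn.entries:
    print(entry)

conn.unbind()  # Unbind: close the connection
```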

The client may request the following operations:

StartTLS — use the LDAPv3 Transport Layer Security (TLS) extension for a secure connection
Bind — authenticate and specify LDAP protocol version
Search — search for and/or retrieve directory entries
Compare — test if a named entry contains a given attribute value
Add a new entry
Delete an entry
Modify an entry
Modify Distinguished Name (DN) — move or rename an entry
Abandon — abort a previous request
Extended Operation — generic operation used to define other operations
Unbind — close the connection (not the inverse of Bind)

As alluded to above, the directory-server database is indeed a database and, as such, is structured in accordance with the rules of its own schema. The contents of the entries in an LDAP domain are governed by a directory schema: a set of definitions and constraints concerning the structure of the directory information tree (DIT).

The schema of a Directory Server defines a set of rules that govern the kinds of information that the server can hold. It has a number of elements, including:

Attribute Syntaxes—Provide information about the kind of information that can be stored in an attribute.
Matching Rules—Provide information about how to make comparisons against attribute values.
Matching Rule Uses—Indicate which attribute types may be used in conjunction with a particular matching rule.
Attribute Types—Define an object identifier (OID) and a set of names that may be used to refer to a given attribute, and associates that attribute with a syntax and set of matching rules.
Object Classes—Define named collections of attributes and classify them into sets of required and optional attributes.
Name Forms—Define rules for the set of attributes that should be included in the Relative Distinguished Name (RDN) for an entry.
Content Rules—Define additional constraints about the object classes and attributes that may be used in conjunction with an entry.
Structure Rules—Define rules that govern the kinds of subordinate entries that a given entry may have.
Attributes are the elements responsible for storing information in a directory, and the schema defines the rules for which attributes may be used in an entry, the kinds of values that those attributes may have, and how clients may interact with those values.

Clients may learn about the schema elements that the server supports by retrieving an appropriate subschema subentry.
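With the ldap3 library used in the sketch above, for instance, this subschema information is fetched automatically when the server object is created with get_info=ALL, so the published schema can be inspected directly (again only a sketch, against the placeholder connection from earlier):

```python
# 'server' comes from the earlier sketch; get_info=ALL causes ldap3 to
# retrieve and parse the subschema subentry on connection.
print(server.schema)                           # object classes, attribute
                                               # types, matching rules, ...
print(server.schema.object_classes["person"])  # one named object class
```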

The schema defines object classes. Each entry must have an objectClass attribute, containing named classes defined in the schema. The schema definition of the classes of an entry defines what kind of object the entry may represent - e.g. a person, organization or domain. The object class definitions also define the list of attributes that must contain values and the list of attributes which may contain values.

For example, an entry representing a person might belong to the classes "top" and "person". Membership in the "person" class would require the entry to contain the "sn" and "cn" attributes, and allow the entry also to contain "userPassword", "telephoneNumber", and other attributes. Since entries may have multiple ObjectClasses values, each entry has a complex of optional and mandatory attribute sets formed from the union of the object classes it represents. ObjectClasses can be inherited, and a single entry can have multiple ObjectClasses values that define the available and required attributes of the entry itself. A parallel to the schema of an objectClass is a class definition and an instance in Object-oriented programming, representing LDAP objectClass and LDAP entry, respectively.
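To make that concrete, a sketch reusing the hypothetical ldap3 connection from earlier could create exactly such an entry (the DN and attribute values are invented for illustration):

```python
# Add an entry whose objectClass values are "top" and "person".
# "person" makes "sn" and "cn" mandatory, and permits optional
# attributes such as "telephoneNumber" and "userPassword".
conn.add("cn=Jane Doe,dc=example,dc=com",
         object_class=["top", "person"],
         attributes={"sn": "Doe",
                     "cn": "Jane Doe",
                     "telephoneNumber": "+44 20 7946 0000"})
print(conn.result)  # did the directory accept the entry?
```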

Directory servers may publish the directory schema controlling an entry at a base DN given by the entry's subschemaSubentry operational attribute. (An operational attribute describes operation of the directory rather than user information and is only returned from a search when it is explicitly requested.)

Server administrators can add additional schema entries in addition to the provided schema elements. A schema for representing individual people within organizations is termed a white pages schema.

We will go on in subsequent posts to examine some of the concepts described here in more detail.

An introduction to Layer 4 handling of real-time traffic on satellite networks

Satellite telecommunications is, by its very nature, prone to long propagation delays and high error rates, both of which can impair the performance of the TCP protocol, most specifically when TCP is used to transport real-time applications. At Apogee Internet, satellite broadband is a core component of the services delivered, so it is important to understand these effects and how they impact the efficiency of the TCP exchange and, consequently, streaming video delivery.

In this regard, we have examined the field using a framework of techniques which can serve to maximise the usability of these channels, and in some cases simply to ensure they are usable in the first place. There are various implementations of TCP which enhance protocol performance by adjusting the role of acknowledgements or by delaying them.

Most existing solutions do not live up to the requirements of today’s real-time applications, which at best results in inefficient utilisation of bandwidth and in extreme cases can affect the transponder in use quite dramatically.

Satellite systems have evolved through the delivery of television services to the point where, nowadays, they have an integral part to play in any national broadband IP delivery strategy. With their ubiquitous reach and ability to broadcast, today's core communications satellites enable the delivery of time-sensitive information over macrogeographical areas. These systems do, however, have drawbacks, such as bandwidth asymmetry. Also, due to the inherent propagation delays involved in transmission across such vast distances, these networks always have a high Bandwidth Delay Product (BDP) and can certainly be described as Long Fat Networks (LFNs), colloquially known as elephant networks.
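To put a rough number on that, here is a back-of-the-envelope sketch in Python (the 10 Mbps link speed is an arbitrary example; the ~550 ms round trip is typical of a geostationary hop):

```python
# Bandwidth Delay Product for an illustrative geostationary satellite link.
bandwidth_bps = 10_000_000   # 10 Mbps -- example figure, not a measurement
rtt_s = 0.550                # ~550 ms round trip via GEO (approximate)

bdp_bytes = bandwidth_bps * rtt_s / 8
print(f"BDP = {bdp_bytes / 1024:.0f} KB")  # about 671 KB "in flight"

# Without window scaling, the classic TCP receive window tops out at
# 64 KB, so a single flow can keep only a fraction of the pipe full.
print(f"A 64 KB window fills {64 * 1024 / bdp_bytes:.0%} of the pipe")
```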

These long transmission distances also result in low-power channels, which in turn bring about Bit Error Rates that are always higher than those of terrestrial networks.

The mainstream Layer 4 protocols in use today are not well placed to operate efficiently under these conditions. TCP, for example, built on the principles of Slow Start, congestion management and Additive Increase Multiplicative Decrease (AIMD), was designed for far less error-prone media such as hard-wired networks, and is manifestly unsuitable for heterogeneous network environments such as satellite links.
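A crude sketch of why AIMD is so painful here: after a single loss, the congestion window is halved and then grows by roughly one segment per round trip, and with a ~550 ms RTT that recovery takes minutes (the figures are illustrative, not a full TCP model):

```python
# Deliberately crude model: additive increase of one segment per RTT
# after a multiplicative-decrease event (no slow start, no delayed ACKs).
mss = 1460                # bytes per segment
rtt_s = 0.550             # geostationary round trip
target = 470              # segments needed to fill the ~671 KB BDP above

cwnd = target // 2        # window halved after one loss event
rtts = target - cwnd      # one extra segment per RTT to get back
print(f"Recovery takes ~{rtts} RTTs = {rtts * rtt_s / 60:.1f} minutes")
# ~235 RTTs, i.e. over two minutes to refill the pipe after one loss.
```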

TCP has three major shortfalls in these circumstances:

1. Ineffective bandwidth utilisation

2. Chatty congestion prevention mechanisms

3. Wasteful windowing

In future posts, we shall go on to examine the implications of these shortcomings in the Layer 4 mechanisms, as well as ways to mitigate their undesirable effects.


Could ants power Web 3.0 to new heights? OSPF vs ANTS

Having recently completed my latest M.Eng block on the subject of "Natural and Artificial Intelligence", I became aware of advances made over the last decade towards a new paradigm of network traffic engineering. This new model turns its back on traditional destination-based solutions (OSPF, EIGRP, MPLS) to the combinatorial problem of decision making in network routing, favouring instead a constructive greedy heuristic which uses stochastic combinatorial optimisation. Put in more accessible terms, it leverages the emergent ability of systems comprised of quite basic autonomous elements working together to perform a variety of complicated tasks with great reliability and consistency.

In 1986, the computer scientist Craig Reynolds set out to investigate this phenomenon through computer simulation. The mystery and beauty of a flock or swarm is perhaps best described in the opening words of his classic 1986 paper on the subject:

The motion of a flock of birds is one of nature’s delights. Flocks and related synchronized group behaviors such as schools of fish or herds of land animals are both beautiful to watch and intriguing to contemplate. A flock ... is made up of discrete birds yet overall motion seems fluid; it is simple in concept yet is so visually complex, it seems randomly arrayed and yet is magnificently synchronized. Perhaps most puzzling is the strong impression of intentional, centralized control. Yet all evidence indicates that flock motion must be merely the aggregate result of the actions of individual animals, each acting solely on the basis of its own local perception of the world.

An analogy with the way ant colonies function has suggested that the emergent behaviour of ant colonies to reliably and consistently optimise paths could be leveraged to enhance the way that the combinatorial optimisation problem of complex network path selection is solved.

The fundamental difference between the modelling of a complex telecommunications network and more commonplace problems of combinatorial optimisation such as the travelling salesman problem is the dynamic nature of the state of a network such as the Internet at any given moment. For example, in the TSP the towns, the routes between them and the associated distances don’t change. Network routing, however, is a dynamic problem. It is dynamic in space, because the shape of the network – its topology – may change: switches and nodes may break down and new ones may come on line. But the problem is also dynamic in time, and quite unpredictably so. The amount of network traffic will vary constantly: some switches may become overloaded, there may be local bursts of activity that make parts of the network very slow, and so on. So network routing is a very difficult problem of dynamic optimisation. Finding fast, efficient and intelligent routing algorithms is a major headache for telecommunications engineers.

So how you may ask, could ants help here? Individual ants are behaviourally very unsophisticated insects. They have a very limited memory and exhibit individual behaviour that appears to have a large random component. Acting as a collective however, ants manage to perform a variety of complicated tasks with great reliability and consistency, for example, finding the shortest routes from their nest to a food source.

These behaviours emerge from the interactions between large numbers of individual ants and their environment. In many cases, the principle of stigmergy is used. Stigmergy is a form of indirect communication through the environment. Like other insects, ants typically produce specific actions in response to specific local environmental stimuli, rather than as part of the execution of some central plan. If an ant's action changes the local environment in a way that affects one of these specific stimuli, this will influence the subsequent actions of ants at that location.

The environmental change may take either of two distinct forms. In the first, the physical characteristics may be changed as a result of carrying out some task-related action, such as digging a hole, or adding a ball of mud to a growing structure. The subsequent perception of the changed environment may cause the next ant to enlarge the hole, or deposit its ball of mud on top of the previous ball. In this type of stigmergy, the cumulative effects of these local task-related changes can guide the growth of a complex structure. This type of influence has been called sematectonic.

In the second form, the environment is changed by depositing something which makes no direct contribution to the task, but is used solely to influence subsequent behaviour which is task related. This sign-based stigmergy has been highly developed by ants and other exclusively social insects, which use a variety of highly specific volatile hormones, or pheromones, to provide a sophisticated signalling system. It is primarily this second mechanism of sign-based stigmergy that has been successfully simulated with computer models and applied as a model to a system of network traffic engineering.

In the traditional network model, packets move around the network completely deterministically. A packet arriving at a given node is routed by the device which simply consults the routing table and takes the optimum path based on its destination. There is no element of probability as the values in the routing table represent not probabilities, but the relative desirability of moving to other nodes.

In the ant colony optimisation model, virtual ants also move around the network, their task being to constantly adjust the routing tables according to the latest information about network conditions. For an ant, the values in the table are probabilities that its next move will be to a certain node. The progress of an ant around the network is governed by the following informal rules, illustrated by the code sketch after the list:

    • Ants start at random nodes.

    • They move around the network from node to node, using the routing table at each node as a guide to which link to cross next.

    • As it explores, an ant ages, the age of each individual being related to the length of time elapsed since it set out from its source. However, an ant that finds itself at a congested node is delayed, and is thus made to age faster than ants moving through less choked areas.

    • As an ant crosses a link between two nodes, it deposits pheromone; however, it leaves it not on the link itself, but on the entry for that link in the routing table of the node it left. The other 'pheromone' values in that column of the node's routing table are decreased, in a process analogous to pheromone decay.

    • When an ant reaches its final destination, it is presumed to have died and is deleted from the system. R.I.P.
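The sketch below (Python, illustrative only, not the published AntNet algorithm's exact update rule) shows the core of the scheme: one node's routing-table column held as probabilities, reinforced by each passing ant and renormalised so that the other entries decay:

```python
import random

# One node's routing-table column for a single destination:
# P(choose neighbour) -- these start out near-uniform.
table = {"B": 0.34, "C": 0.33, "D": 0.33}

def reinforce(table, chosen, deposit=0.1):
    """Deposit 'pheromone' on the entry the ant arrived through, then
    renormalise; the division plays the role of pheromone decay."""
    table[chosen] += deposit
    total = sum(table.values())
    for hop in table:
        table[hop] /= total

def next_hop(table):
    """Ants move probabilistically, weighted by pheromone levels."""
    return random.choices(list(table), weights=list(table.values()))[0]

for _ in range(20):          # twenty ants that all arrived via link B
    reinforce(table, "B")

print({hop: round(p, 2) for hop, p in table.items()})
# e.g. {'B': 0.9, 'C': 0.05, 'D': 0.05} -- traffic now strongly favours B
print("an ant leaving now would probably cross link", next_hop(table))
```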



Testing the ant colony optimisation system and measuring its performance against a number of other well-known routing techniques produced good results: the system outperformed all of the established mechanisms. However, there are potential problems of the kind that constantly plague dynamic optimisation algorithms. The most significant is that, after a long period of stability and equilibrium, the ants become locked into their accustomed routes, unable to break out of these patterns to explore new routes capable of meeting the new conditions that would exist if the network were to change suddenly. This can be mitigated, however, in the same way that evolutionary computation introduces mutation to fully explore new possibilities: by introducing an element of purely random behaviour to the ant.

'Ant net' routing has been tested on models of US and Japanese communications networks, using a variety of different possible traffic patterns. The algorithm worked at least as well as, and in some cases much better than, four of the best-performing conventional routing algorithms. Its results were even comparable to those of an idealised ‘daemon’ algorithm, with instantaneous and complete knowledge of the current state of the network.

It would seem we have not heard the last of these routing antics... (sorry, couldn't resist).


Forget about 3G, here comes 4G (LTE)

The LTE hits just keep coming: Chunghwa Telecom said this week that it plans to start testing LTE with Ericsson gear in northern Taiwan. Meanwhile, in Japan, Ericsson customer NTT DoCoMo has started its 4G upgrade. It plans to launch commercially in 2010.

Along with Cisco's recently approved purchase of Starent Networks, these are the latest moves in a market that is rapidly heating up, putting a spotlight on the opportunities for infrastructure vendors. Ericsson has been in the spotlight all week, since Swedish incumbent TeliaSonera launched the first commercial LTE network on Monday, using equipment from Ericsson as well as Huawei.

It’s likely that an infrastructure vendor battle will soon heat up as more trials get underway. Huawei is looking like a big threat to the Tier 1 vendors; it’s signed on to 25 trials and deployments worldwide, it says, including plans to integrate Belgian incumbent Belgacom’s GSM, HSPA and future LTE networks in a converged radio access network and all-IP core. The Chinese vendor will also replace Belgacom’s existing RAN supplier, which happens to be Nokia Siemens Networks.

Also, Telecom Italia said it is working with Huawei for an LTE trial in Turin.

That said, NSN and Alcatel-Lucent are determined to also be a part of the LTE story. NSN recently announced that global operator Telefónica will run a six-month 4G trial in the Czech Republic on NSN’s end-to-end LTE solution. Meanwhile, it also has been tackling the voice-over-LTE goal, and completed successful IMS-compliant voice calls and SMS messaging using 3GPP-standardized LTE equipment, and says it will also soon conduct VoLTE test calls with a fully implemented IMS system.

Not to be outdone, Alcatel-Lucent said that it too has called and texted across standard LTE equipment, but using the interim standard from the 3GPP known as VoLGA.

The first carriers out of the gate after TeliaSonera with the 4G broadband technology – which promises 20 Mbps to 40 Mbps in throughput, initially – will likely be Verizon Wireless and NTT DoCoMo. Regional carriers MetroPCS and U.S. Cellular also have plans to deploy LTE next year, along with KDDI in Japan, and Tele2 and Telenor in Europe. AT&T and China Mobile are planning LTE rollouts for 2011. Most incumbents have LTE on their to-do list at some point, making for a rich new vein for infrastructure vendors to mine.

Some markets will be richer than others. "Spectrum availability is the primary factor impacting deployment plans," said senior ABI analyst Nadine Manjaro. "In countries where telecommunications regulators are making appropriate spectrum available, many operators have announced plans to launch LTE. These include the U.S., Sweden, China and others. Where no such spectrum allocations exist, operators are postponing LTE plans." The United Kingdom, surprise surprise, will likely be slower to roll out LTE because of spectrum availability.
