Whispers & Screams
And Other Things

The Web By Proxy

I've been working on networks for decades, and for as long as I can remember network proxies have existed. I first came across the idea when I worked for IBM as an SNA programmer back in the late 90s, but it's in more recent years that network proxies have taken on greater importance.

Network Functions Virtualization on the Software-Defined Network

In the modern telecoms industry, driven by the fast-changing demands that a connected society makes of it, a huge number of new applications are emerging, such as IPX, eHealth, Smart Cities and the Internet of Things. Each of these emergent applications requires new customisations of the ecosystem to manage traffic through a wide variety of service providers.

This is the core challenge faced by today's infrastructure, but we must also not overlook the fact that serving this larger ecosystem requires enormous changes to OSS infrastructure and to the way networks are managed. Service providers sit in the awkward space between end users and the emergent technologies, but it is the fact that these technologies and their business models often emerge on a month-to-month basis that presents the greatest challenge.

If we consider all the IT assets ISPs and telcos have at their points of presence, they represent a significant and very much underused resource. The holy grail for many of these organisations is to unlock all of this storage and computing capacity and turn it into a virtualised resource. This strategy opens up some intriguing possibilities, such as bringing remote resources to bear during times of heavy compute load at a specific locale from areas where capacity is less constrained. In infrastructure terms, this cloud-oriented world of adding new network capacity whenever and wherever it is needed becomes a matter of merely sliding more cards into racks or deploying new software, which greatly lowers the cost of scaling the network by commoditising the components used to build up a service provider's infrastructure.

Agility of services is the key to this new world order, in which services can be created orders of magnitude more quickly than was traditionally the case. In this new model the division between content providers and service providers becomes blurred. The flexibility to manage this dynamism is the key to the industry being able to meet the demands that the connected society will increasingly place on it, and it is the players who manage this balancing act most effectively that will come out on top.

This is where NFV comes in. The advent of Network Functions Virtualization, or NFV, has strong parallels with the developments in the computing world that gave us the cloud, big data and other commodity computing advances. Using capacity where and when it is required, with far less visibility into the physical layout of the network than we have today, presents a whole new set of challenges. Just as computing hardware has developed and become more capable, software complexity has grown alongside it.

The management of NFV will be critical to its operation, and the way that end-user functionality is moving to the cloud today offers a sneak preview of this. A lot of careful design consideration will be required, and service providers need to begin adapting their infrastructure today to accommodate this future virtualization.

Closely related to NFV, and indeed an enabler of it, is Software-Defined Networking, or SDN. SDN can provide improved network efficiency and better cost savings, allowing the network to follow the sun: turning down servers or network hardware when the load lightens, or even turning them off at night. In a wireless environment, for example, if you could turn off all the excess network capacity not in use from 10 p.m. to 6 a.m., you would see a significant decrease in the cost of electricity and cooling.

The continued integration of technologies such as OpenFlow into the latest network management implementations will further enable this trend. We will increasingly see OSS and BSS systems seek to pre-empt their traditionally reactive mechanisms by looking further up the business model, stealing vital time with which to maximise the effectiveness of their influence and, ultimately, the value add of their managed virtualised domains.

Lightweight Directory Access Protocol (LDAP)

Traditional network engineers who arrive at the networking industry via the world of telecommunications can often find themselves unfamiliar with certain facets of the industry, such as network security and servers. A protocol which lies at the intersection of network security and server technology is LDAP, which stands for Lightweight Directory Access Protocol.

So what is LDAP and what is it used for? Let's take a look at the protocol in some detail.

Within the OSI model, LDAP sits at layer 7 and is, as such, an application layer protocol. LDAP is also an "open" protocol, which means that its standards are public information and it is not associated with or owned by any individual commercial organisation. Its primary purpose is to provide a protocol for accessing and maintaining distributed directory information services over an IP network, and it was specified to fit seamlessly into a TCP/IP network.

The most common use for LDAP is to provide a mechanism for single sign-on across a distributed, multi-facility IT estate, minimising repeated authentication across multiple services. LDAP is based on a subset of the older and more heavily specified X.500 protocol, which was designed to be compatible with the more abstract OSI model.

When people talk about “LDAP”, they are really talking about the complex combination of business rules, software and data that allow you to log in and get access to secure resources.

A client starts an LDAP session by connecting to an LDAP server, called a Directory System Agent (DSA), by default on TCP and UDP port 389, or on port 636 for LDAPS (LDAP over SSL/TLS). In Active Directory environments, the Global Catalog is available by default on port 3268, or 3269 for LDAPS. The client then sends an operation request to the server, and the server sends responses in return. With some exceptions, the client does not need to wait for a response before sending the next request, and the server may send the responses in any order. All information is transmitted using Basic Encoding Rules (BER), a type-length-value (TLV) encoding. The LDAP server hosts something called the directory-server database, so the LDAP protocol can be thought of loosely as a network-enabled database query language.

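To make that flow concrete, here is a minimal sketch in Python using the third-party ldap3 library; the hostname, bind DN, password and base DN are hypothetical placeholders, and the library handles the BER encoding of every request and response for you.

    from ldap3 import Server, Connection, ALL, SUBTREE

    # Hypothetical DSA; 389 is the plain LDAP default (636 for LDAPS).
    server = Server('ldap.example.com', port=389, get_info=ALL)

    # Bind as a hypothetical directory administrator.
    conn = Connection(server,
                      user='cn=admin,dc=example,dc=com',
                      password='secret',
                      auto_bind=True)

    # Send a Search request; the server answers with matching entries.
    conn.search(search_base='dc=example,dc=com',
                search_filter='(objectClass=person)',
                search_scope=SUBTREE,
                attributes=['cn', 'sn'])

    for entry in conn.entries:
        print(entry.entry_dn, entry.sn)

    conn.unbind()  # close the session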

The client may request the following operations (a usage sketch follows the list):

StartTLS — use the LDAPv3 Transport Layer Security (TLS) extension for a secure connection
Bind — authenticate and specify LDAP protocol version
Search — search for and/or retrieve directory entries
Compare — test if a named entry contains a given attribute value
Add — add a new entry
Delete — delete an entry
Modify — modify an entry
Modify Distinguished Name (DN) — move or rename an entry
Abandon — abort a previous request
Extended Operation — generic operation used to define other operations
Unbind — close the connection (not the inverse of Bind)

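To put a few of those operations together, the sketch below, again using ldap3 and hypothetical DNs, exercises StartTLS, Bind, Compare, Modify and Unbind; treat it as an illustration rather than a production recipe.

    from ldap3 import Server, Connection, MODIFY_REPLACE

    conn = Connection(Server('ldap.example.com'),      # hypothetical host
                      user='cn=admin,dc=example,dc=com',
                      password='secret')

    conn.open()        # plain TCP connection to port 389
    conn.start_tls()   # StartTLS: upgrade the connection to TLS
    conn.bind()        # Bind: authenticate and set the protocol version

    # Compare: does the entry's sn attribute hold the value 'Doe'?
    print(conn.compare('cn=John Doe,ou=people,dc=example,dc=com', 'sn', 'Doe'))

    # Modify: replace the value of the telephoneNumber attribute.
    conn.modify('cn=John Doe,ou=people,dc=example,dc=com',
                {'telephoneNumber': [(MODIFY_REPLACE, ['+44 20 7946 0000'])]})

    conn.unbind()      # Unbind: close the connection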

As was alluded to above, the directory-server database is indeed a database and, as a database, is structured in accordance with the rules of its own schema. The contents of the entries in an LDAP domain are governed by a directory schema, a set of definitions and constraints concerning the structure of the directory information tree (DIT).

The schema of a Directory Server defines a set of rules that govern the kinds of information that the server can hold. It has a number of elements, including:

Attribute Syntaxes—Provide information about the kind of information that can be stored in an attribute.
Matching Rules—Provide information about how to make comparisons against attribute values.
Matching Rule Uses—Indicate which attribute types may be used in conjunction with a particular matching rule.
Attribute Types—Define an object identifier (OID) and a set of names that may be used to refer to a given attribute, and associates that attribute with a syntax and set of matching rules.
Object Classes—Define named collections of attributes and classify them into sets of required and optional attributes.
Name Forms—Define rules for the set of attributes that should be included in the RDN for an entry.
Content Rules—Define additional constraints about the object classes and attributes that may be used in conjunction with an entry.
Structure Rules—Define rules that govern the kinds of subordinate entries that a given entry may have.

Attributes are the elements responsible for storing information in a directory, and the schema defines the rules for which attributes may be used in an entry, the kinds of values that those attributes may have, and how clients may interact with those values.

Clients may learn about the schema elements that the server supports by retrieving an appropriate subschema subentry.

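As a rough illustration of that, ldap3 reads the subschema subentry automatically when the server is opened with get_info=ALL and exposes the parsed schema on the Server object; this assumes the hypothetical server permits anonymous reads of its schema.

    from ldap3 import Server, Connection, ALL

    server = Server('ldap.example.com', get_info=ALL)  # fetch rootDSE and subschema
    conn = Connection(server, auto_bind=True)          # anonymous bind, if permitted

    # Parsed schema: object classes, attribute types, matching rules, syntaxes.
    print(server.schema.object_classes['person'])      # shows required sn and cn
    print(server.schema.attribute_types['cn'])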

The schema defines object classes. Each entry must have an objectClass attribute, containing named classes defined in the schema. The schema definition of the classes of an entry defines what kind of object the entry may represent, e.g. a person, organisation or domain. The object class definitions also define the list of attributes that must contain values and the list of attributes that may contain values.

For example, an entry representing a person might belong to the classes "top" and "person". Membership in the "person" class would require the entry to contain the "sn" and "cn" attributes, and would allow it also to contain "userPassword", "telephoneNumber" and other attributes. ObjectClasses can be inherited, and since an entry may have multiple objectClass values, each entry has a set of mandatory and optional attributes formed from the union of all the object classes it represents. An objectClass and an entry are analogous, respectively, to a class definition and an instance in object-oriented programming.

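Continuing the hypothetical example, an Add request for such an entry must carry the mandatory "sn" and "cn" attributes and may carry optional ones; a sketch with ldap3:

    from ldap3 import Server, Connection

    # Hypothetical server and administrative bind, as before.
    conn = Connection(Server('ldap.example.com'),
                      user='cn=admin,dc=example,dc=com',
                      password='secret',
                      auto_bind=True)

    # Object classes top and person: sn and cn are mandatory,
    # telephoneNumber is one of the optional attributes.
    conn.add('cn=John Doe,ou=people,dc=example,dc=com',
             ['top', 'person'],
             {'sn': 'Doe', 'cn': 'John Doe', 'telephoneNumber': '+44 20 7946 0000'})
    print(conn.result)  # the server's response to the Add request
    conn.unbind()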

Directory servers may publish the directory schema controlling an entry at a base DN given by the entry's subschemaSubentry operational attribute. (An operational attribute describes operation of the directory rather than user information and is only returned from a search when it is explicitly requested.)

Server administrators can define additional schema elements beyond those provided by default. A schema for representing individual people within organisations is termed a white pages schema.

We will go on in subsequent posts to examine some of the concepts described here in more detail.