Whispers & Screams
And Other Things

The Joy Of Driving

As anyone who has driven on the UK's congested motorways will attest, once the roads pass a critical threshold of overload, the sheer unpredictability of the drivers around you becomes the most important thing on your mind. All it takes is for one driver to touch the brake pedal, lighting up the brake lights, and a chain reaction of panic ensues in their wake. If an accident is luckily avoided, it's almost a certainty that one of those frustratingly inexplicable, seemingly causeless traffic jams will follow.

I have always believed in the power of computer network traffic engineering techniques to come to our aid in situations like this, but, just as on a crowded pavement, the unpredictability of the individual has kept such a solution frustratingly out of reach.

But it seems that automation and machine learning have brought this notion a step closer. By abdicating control to our machines, we can put network traffic theory into practice and keep flow close to optimal.

Until recently we have lacked any way for vehicles to work together, and it is this collaborative effort, overseen and perhaps controlled by a meta-intelligence, that can bring about the seismic change that has so far eluded us.

For my own part I detest most driving. It's basically dead time in which my brain is tied up with one mind-numbing task when I'd much rather be reading a book, getting some work done or even just sleeping. The day when I can tell my car where I want to go and then switch off until I'm there will be a red letter day for me. I was therefore pleased to hear about recent research confirming that, in tests, a fleet of driverless cars collaborating with one another can improve overall traffic flow by at least 35%.

Michael He, one of the researchers, was quoted as saying: "Autonomous cars could fix a lot of different problems associated with driving into, within and between cities but there has to be a way for them to work together."

The key will lie in the adoption of standards and, just as during the development of the standards which now dominate the internet, we are in a period of competition in which the standard that wins out may not be the best. (Think ATM vs Ethernet for transporting video, and VHS vs Betamax for watching it.)

Much of the current testing and development is done using scale models built around single-board computers (SBCs) such as the Raspberry Pi or Orange Pi, which lets researchers avoid the prohibitive cost of building full-scale test environments. In such swarm systems, where every node in the network can communicate at least with its neighbours, it becomes possible for the overarching 'intelligence' to manage the meta-priority of optimal traffic flow, achieving something approaching harmony in a ballet of competing priorities and near misses that would send most human drivers to the hard shoulder. Cars can be packed more closely together and yet continue to make progress towards their destinations in conditions that would be untenable with unpredictable humans at the wheel.

Interestingly, these tests simulated a mix of human drivers and automata, with the fleet's collaboration level set to either egocentric or cooperative. Improvements of 35% were observed with cooperative traffic, while with egocentric driving the improvement was as much as 45%.
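
To make the idea concrete, here is a toy ring-road simulation in Python. It is not the researchers' model, just the classic Nagel-Schreckenberg cellular automaton, and the only difference between the two runs is a random hesitation term standing in for human unpredictability. The parameters are invented for illustration and the printed improvement figure will vary from run to run.

```python
import random

ROAD_CELLS = 200      # ring road, one cell is roughly a car length plus gap
N_CARS = 60           # fairly dense traffic (density 0.3)
V_MAX = 5             # maximum speed in cells per time step
STEPS = 2000
P_DAWDLE = 0.3        # chance per step that a human driver hesitates/over-brakes


def simulate(p_dawdle):
    """Nagel-Schreckenberg ring-road model; returns the mean speed per car."""
    positions = sorted(random.sample(range(ROAD_CELLS), N_CARS))
    speeds = [0] * N_CARS
    total = 0.0
    for step in range(STEPS):
        new_positions = []
        for i, pos in enumerate(positions):
            # Free cells between this car and the one ahead (ring wrap-around).
            gap = (positions[(i + 1) % N_CARS] - pos) % ROAD_CELLS - 1
            v = min(speeds[i] + 1, V_MAX)   # try to accelerate
            v = min(v, gap)                 # never run into the car ahead
            if v > 0 and random.random() < p_dawdle:
                v -= 1                      # random human hesitation
            speeds[i] = v
            new_positions.append((pos + v) % ROAD_CELLS)
        positions = new_positions
        if step >= STEPS // 2:              # ignore the warm-up period
            total += sum(speeds) / N_CARS
    return total / (STEPS - STEPS // 2)


if __name__ == "__main__":
    human = simulate(P_DAWDLE)   # hesitant humans -> phantom jams
    robot = simulate(0.0)        # a fleet that never hesitates -> smooth flow
    print(f"human-style mean speed : {human:.2f} cells/step")
    print(f"automated mean speed   : {robot:.2f} cells/step")
    print(f"improvement            : {100 * (robot - human) / human:.0f}%")
```

Run it a few times: the hesitant fleet settles into stop-and-go waves, the phantom jams described above, while the fleet that never hesitates holds a steady, denser flow.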

Machine learning and swarm modelling are bringing this imagined utopia into reality with staggering speed, and for this driver, the day when I can tell my car where I'm going and then put my feet up can't come a moment too soon.

 

The Web By Proxy

I've been working on networks for decades and, for as long as I can remember, network proxies have existed. I first came across the idea when I worked for IBM as an SNA programmer back in the late 90s, but it's in more recent years that network proxies have taken on greater importance.


What on earth is making my home network so slow! (Part 1)

Let's face it, we've all been there: sitting and wondering why on earth a network connection that, up until five minutes ago, had been working just fine is now all but useless. Less tech-savvy individuals may just shrug their shoulders and try again later, but everybody else is left wondering why. The fact that you're reading this blog post automatically places you in the latter category. So, to the problem. Could it be that somebody else in the house has started a large download? If so, it's easily solved just by asking around, but the plethora of devices in our houses today makes the job a lot more complex. For me it was a long-forgotten mobile phone owned by my son, left on charge under the bed and set to auto-update its firmware and apps, that proved the final straw and drove me to come up with a solution to this problem.

Let's look at the problem in the round first of all. Homes nowadays usually have a router which connects to the cable company or to the telephone line. That router allows all of the devices in the house, wired or wireless, to connect to the net. It's not uncommon for a home network to support 10 to 20 devices, not all of which will be known to every other member of the household. Any one of these devices has the potential to bring the network to its knees for hours on end simply by starting a large download. Of course, the possibility also exists that somebody outside has gained access to your network, and it's important that this is not overlooked.

The first step in getting a handle on the situation is to take control of your home router and secure it so that it cannot be manipulated by anybody else. Most home routers have a small, cut-down web server running on board which serves the management page, from which all of the settings on the device can be changed. The page is usually accessible from both the wired and the wireless network. If you are using a Windows machine, the easiest way to reach it is as follows:

    1. Click the Start button, type cmd into the "search programs and files" box and press Enter. In the command window that appears, type the command "ipconfig". Among other things, the output will list the address of the default gateway. Take careful note of this address (192.168.1.1 in this example). If you'd rather script this step, there is a short sketch after these instructions that does the same lookup automatically.

    2. Open up a browser, type this default gateway address into the address bar and press Enter. If your router is new or poorly configured you should now be looking at the control page for the device; if it is configured properly you should instead be looking at a login prompt.

    3. Once logged in, you will be able to control the settings of the router.
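
If you would rather script steps 1 and 2, here is a minimal Python sketch that runs ipconfig, pulls out the first IPv4 default gateway it finds and opens the router's management page in your browser. It assumes an English-language Windows installation, where the label is literally "Default Gateway".

```python
import re
import subprocess
import webbrowser

IPV4 = re.compile(r"\b(\d{1,3}(?:\.\d{1,3}){3})\b")


def default_gateway():
    """Run ipconfig and return the first IPv4 default gateway it reports."""
    out = subprocess.run(["ipconfig"], capture_output=True, text=True).stdout
    lines = out.splitlines()
    for i, line in enumerate(lines):
        if "Default Gateway" in line:
            # The IPv4 address sits on this line or, when an IPv6 gateway
            # is listed first, on the continuation line underneath it.
            for candidate in lines[i:i + 2]:
                match = IPV4.search(candidate)
                if match:
                    return match.group(1)
    return None


if __name__ == "__main__":
    gateway = default_gateway()
    if gateway:
        print(f"Router management page is probably http://{gateway}/")
        webbrowser.open(f"http://{gateway}/")   # step 2, automated
    else:
        print("No IPv4 default gateway found - are you connected?")
```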



This post is not intended as a guide to any specific router, so I will keep any further instructions deliberately broad in scope.

The following bullets will link to forthcoming posts, each examining a different aspect of the problem. Check back soon to see them as they become available.

    • Who is connected? Checking which devices are connected to your router on the Wi-Fi and wired networks, and establishing whether or not they should be.

    • What are they doing? Most routers show a basic table of transferred bandwidth as part of their reporting. This can be used to examine the usage on your network and ascertain which devices are consuming most of it.

    • Securing my router. As touched on previously, the router should be configured so that only those users you wish to have access can reach both the network and the router's management page.

    • Customising the router's firmware. Home routers bought off the shelf often ship with woefully inadequate firmware that is frequently shown to be buggy at best and insecure at worst. Consider replacing it with fully customisable open-source firmware such as DD-WRT or Tomato.

    • Open-source router management (Wireshark and SNMP). Want to take control of your home network to the max? Consider implementing network management, bandwidth management and device management; a small SNMP polling sketch follows this list.
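
As a small taste of the SNMP route, the sketch below shells out to net-snmp's snmpwalk, reads the standard IF-MIB interface counters from the router twice, and prints an approximate inbound rate per interface. The router address and the "public" read-only community string are assumptions for the example, and your router must have SNMP enabled (stock firmware often doesn't, which is one more argument for DD-WRT or Tomato).

```python
import subprocess
import time

ROUTER = "192.168.1.1"                  # assumption: your router's LAN address
COMMUNITY = "public"                    # assumption: read-only community string
POLL_SECONDS = 10
IF_DESCR = "1.3.6.1.2.1.2.2.1.2"        # IF-MIB::ifDescr   (interface names)
IF_IN_OCTETS = "1.3.6.1.2.1.2.2.1.10"   # IF-MIB::ifInOctets (bytes received)


def snmp_walk(oid):
    """Return {index: value} for one SNMP table column via net-snmp's snmpwalk."""
    out = subprocess.run(
        ["snmpwalk", "-v2c", "-c", COMMUNITY, "-On", ROUTER, oid],
        capture_output=True, text=True, check=True).stdout
    table = {}
    for line in out.splitlines():
        # Lines look like: .1.3.6.1.2.1.2.2.1.10.3 = Counter32: 123456
        left, _, right = line.partition(" = ")
        if not right:
            continue
        index = left.rsplit(".", 1)[-1]
        table[index] = right.split(":", 1)[-1].strip().strip('"')
    return table


if __name__ == "__main__":
    names = snmp_walk(IF_DESCR)
    first = snmp_walk(IF_IN_OCTETS)
    time.sleep(POLL_SECONDS)
    second = snmp_walk(IF_IN_OCTETS)
    print("Approximate inbound rate per interface:")
    for index, name in names.items():
        delta = int(second.get(index, 0)) - int(first.get(index, 0))
        print(f"  {name:20s} {delta * 8 / POLL_SECONDS / 1000:.1f} kbit/s")
```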



I hope this post has proved informative as an intro to controlling your home network. Check back soon for further updates.


Too Much Information - Hadoop and Big Data

Hadoop, a free, Java-based programming framework that makes it possible to run applications on systems with thousands of nodes handling thousands of terabytes of data, supports the processing of large data sets in a distributed computing environment and is part of the Apache project sponsored by the Apache Software Foundation. Its distributed file system facilitates rapid data transfer between nodes and allows the system to continue operating uninterrupted in the event of a node failure. This approach lowers the risk of catastrophic system failure, even if a significant number of nodes become inoperative.

Hadoop was inspired by Google's MapReduce, a software framework in which an application is broken down into numerous small parts. Any of these parts (also called fragments or blocks) can be run on any node in the cluster. Doug Cutting, Hadoop's creator, named the framework after his child's stuffed toy elephant. The current Apache Hadoop ecosystem consists of the Hadoop kernel, MapReduce, the Hadoop distributed file system (HDFS) and a number of related projects such as Apache Hive, HBase and Zookeeper.
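
To make the MapReduce idea concrete, here is the canonical word-count job written for Hadoop Streaming, which lets the mapper and reducer be plain scripts that read stdin and write stdout. This is a minimal sketch: the input/output paths and the location of the streaming jar below are placeholders that vary between distributions.

```python
#!/usr/bin/env python3
# mapper.py -- emit "word<TAB>1" for every word it reads.
# A typical (distribution-dependent) way to submit the job:
#   hadoop jar /path/to/hadoop-streaming.jar \
#       -input /data/books -output /data/wordcount \
#       -mapper mapper.py -reducer reducer.py \
#       -file mapper.py -file reducer.py
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word.lower()}\t1")
```

Hadoop splits the input into blocks, runs the mapper on the nodes where those blocks already live, then sorts the intermediate pairs by key before handing each group to the reducer:

```python
#!/usr/bin/env python3
# reducer.py -- input arrives sorted by word, so counts for one word are
# contiguous; sum them and emit "word<TAB>total" each time the word changes.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, _, count = line.rstrip("\n").partition("\t")
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, int(count)

if current_word is not None:
    print(f"{current_word}\t{current_count}")
```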

The Hadoop framework is used by major players including Google, Yahoo and IBM, largely for applications involving search engines and advertising. The preferred operating systems are Windows and Linux but Hadoop can also work with BSD and OS X.

The rapid proliferation of unstructured data is one of the driving forces behind the new paradigm of big data analytics. According to one study, we now produce as much data every 10 minutes as was created from the beginning of recorded time through to the year 2003. The preponderance of the data being created is of the unstructured variety -- up to about 90%, according to IDC.

Big data is not just about capturing a wide variety of unstructured data; it is about combining that data with other data to gain new insights that can be used in many ways to improve business performance. For instance, in retail it could mean delivering faster and better services to customers; in research, conducting tests over much wider sample sizes; in healthcare, faster and more accurate diagnoses of illnesses.

The ways in which big data will change our lives are significant, and just beginning to reveal themselves to those who are willing to capture, combine, and discover answers to their big questions. For big data to deliver on its vast potential, however, technology must be in place that enables organisations to capture and store massive amounts of unstructured data in its native format. That's where Hadoop has become one of the enabling data processing technologies for big data analytics. Hadoop allows dramatically bigger business questions to be answered, something we are already starting to see realised by the large public cloud companies and which will shortly spread into other IT-oriented industries and services.

More than 50% of participating companies have begun implementing Hadoop as a data hub or an auxiliary data repository alongside their existing infrastructure, according to Intel's 2013 IT Manager's Survey on How Organizations are Using Big Data. A further 31% reported that they were evaluating an open-source Apache Hadoop framework.

So what are the key characteristics IT professionals should know about Hadoop in order to maximize its potential in managing unstructured data and advancing the cause of big data analytics? Here are five to keep in mind:

    1. Hadoop is economical. As an open-source software framework, Hadoop runs on standard servers. Hardware can be added to or swapped in and out of a cluster, and operational costs are relatively low because the software is common across the infrastructure, requiring little tuning for each physical server.

    2. Hadoop provides an efficient framework for processing large sets of data. MapReduce is the software programming framework in the Hadoop stack. Simply put, rather than moving data across a network to be processed, MapReduce moves the processing software to the data. In addition to simplifying the processing of big data sets, MapReduce gives programmers a common method of defining and orchestrating complex processing tasks across clusters of computers.

    3. Hadoop complements your existing database and analytics infrastructure rather than displacing it. Hadoop can handle data sets and tasks that are a problem for legacy databases. In big data environments, you want to make sure that the underlying storage and infrastructure platform for the database can handle the capacity and speed of big data initiatives, particularly for mission-critical applications. Because of this capability Hadoop can be, and has been, implemented as a replacement for existing infrastructure, but only where doing so fits the business need or offers an advantage.

    4. Hadoop provides the best value where it is implemented on the right infrastructure. The Hadoop framework typically runs on mainstream standard servers using common Intel® server hardware. Newer servers with the latest Intel® processors, a larger memory footprint and more cache will typically provide better performance. Hadoop also performs better with faster in-node storage, so systems should contain some amount of solid-state storage, and the storage infrastructure should be optimised with the latest advances in automated tiering, deduplication, compression, encryption, erasure coding and thin provisioning. Once Hadoop scales to encompass larger data sets it benefits from faster networks, so 10Gb Ethernet rather than typical 1GbE bandwidth provides a further boost.

    5. Hadoop is supported by a large and active ecosystem. Big data is a big opportunity, not just for those using it to deliver competitive advantage, but also for those providing solutions. A large and active ecosystem has developed quickly around Hadoop, as it usually does around open-source projects. As an example, Intel recently invested $740 million in Cloudera, the leading Hadoop distribution. Vendors are available to provide all or part of the Hadoop stack, including management software, third-party applications and a wide range of other tools to help simplify deployment.



Unstructured data is growing nonstop across a variety of applications, in a wide range of formats. Those companies that are best able to harness it and use it for competitive advantage are seeing significant results and benefits. That’s why more than 80% of the companies surveyed by Intel are using, implementing or evaluating Hadoop.


Network Functions Virtualization on the Software-Defined Network

In the modern telecoms industry, driven by the fast-changing demands that a connected society makes of it, a huge number of new applications are emerging, such as IPX, eHealth, smart cities and the Internet of Things. Each of these emergent applications requires new customisations of the ecosystem to manage traffic across a wide variety of service providers.

This is the core challenge faced by today's infrastructure, but we must also not overlook the fact that serving this larger ecosystem requires an enormous change to OSS infrastructure and to the way networks are managed. Service providers sit in the awkward space between end users and the emergent technologies, but it is the fact that these technologies and their business models often emerge on a month-to-month basis that presents the greatest challenge.

If we consider all the IT assets ISPs and telcos have at their points of presence, they represent a significant and very much under-used resource. The holy grail for many of these organisations is to unlock all of this storage and computing capacity and turn it into a virtualised resource. This strategy opens up some intriguing possibilities, such as bringing remote resources to bear, from areas where capacity is less constrained, during times of heavy compute load at a specific locale. In infrastructure terms, this cloud-oriented world of adding new network capacity whenever and wherever it is needed becomes a matter of merely sliding more cards into racks or deploying new software, which greatly lowers the cost of scaling the network by commoditising the components used to build up a service provider's infrastructure.

Service agility is the key to this new world order, in which services can be created orders of magnitude more quickly than was traditionally the case and the division between content providers and service providers becomes blurred. The flexibility to manage this dynamism is the key to the industry meeting the demands that the connected society will increasingly place on it, and it is the players who manage this balancing act most effectively that will come out on top.

This is where NFV comes in. The advent of Network Functions Virtualisation, or NFV, has strong parallels with the developments in the computing world that gave us the cloud, big data and other commodity computing advances. Using capacity where and when it is required, with far less visibility of the network's physical layout than we have today, presents a whole new set of challenges. And as computing hardware has developed and become more capable, a greater level of software complexity has grown up alongside it.

The management of NFV will be critical to its operation, and the way end-user functionality is moving to the cloud today offers a sneak preview of this. A lot of careful design consideration will be required, and service providers need to begin adapting their infrastructure today to accommodate this future virtualisation.

Closely related to NFV, and indeed an enabler of it, is Software-Defined Networking, or SDN. SDN can deliver improved network efficiency and better cost savings, allowing the network to follow the sun, turning down servers or network hardware when load lightens or even turning them off at night. In a wireless environment, for example, if you could switch off all the excess network capacity that sits unused from 10 p.m. to 6 a.m., you would see a significant decrease in the cost of electricity and cooling.
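
As a purely illustrative, back-of-an-envelope sketch of that follow-the-sun idea (a toy, not any particular controller's API), the snippet below picks out the lightly loaded sites an orchestrator could power down overnight. The site names, load figures and threshold are invented for the example; a real deployment would drive the decision from the controller's own telemetry, such as OpenFlow port counters.

```python
from dataclasses import dataclass
from datetime import datetime

NIGHT_HOURS = (range(22, 24), range(0, 6))   # 10 p.m. to 6 a.m.
NIGHT_LOAD_THRESHOLD = 0.15                  # keep anything busier than 15%


@dataclass
class Site:
    name: str
    utilisation: float      # fraction of capacity currently in use


def is_night(now: datetime) -> bool:
    return any(now.hour in hours for hours in NIGHT_HOURS)


def plan_power_down(sites, now):
    """Return the sites that could safely be switched off right now."""
    if not is_night(now):
        return []
    return [s for s in sites if s.utilisation < NIGHT_LOAD_THRESHOLD]


if __name__ == "__main__":
    sites = [Site("city-centre", 0.41), Site("suburb-a", 0.07),
             Site("suburb-b", 0.12), Site("business-park", 0.03)]
    for site in plan_power_down(sites, datetime(2014, 6, 1, 2, 30)):
        print(f"power down {site.name} (load {site.utilisation:.0%})")
```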

The continued integration of technologies such as OpenFlow into the latest network management implementations will further enable this trend. Increasingly we will see OSS and BSS systems move beyond their traditionally reactive mechanisms, looking further up the business model to buy the vital time they need to maximise the effectiveness of their influence and, ultimately, the value added by the virtualised domains they manage.
