Whispers & Screams
And Other Things

The Web By Proxy

I've been working on networks for decades, and for as long as I can remember, network proxies have existed. I first came across the idea when I worked for IBM as an SNA programmer back in the late 90s, but it's in more recent years that network proxies have taken on greater importance.


What on earth is making my home network so slow! (Part 1)

Let's face it, we've all been there: sitting wondering why on earth a network connection that, up until five minutes ago, had been working just fine is now all but useless. Less tech-savvy individuals may just shrug their shoulders and try again later, but everybody else is left wondering why. As a reader of this blog you automatically fall into the latter category. So, to the problem. Could it be that somebody else in the house has started a large download? If so, that's the easiest case to solve, simply by asking around, but the plethora of devices in our homes today makes the job a lot more complex. For me, the final straw was a long-forgotten mobile phone belonging to my son, left on charge under a bed and set to auto-update its firmware and apps; it drove me to come up with a solution to this problem.

Let's look at the problem in the round first of all. Homes nowadays usually have a router which connects off to the cable company or to the telephone line. These routers allow all of the devices in the house to connect to the net, whether on the wireless or the wired side of life. It's not uncommon for a home network to support 10 to 20 devices, not all of which will be known about by every other member of the household. Any one of these devices has the potential to bring the network to its knees for hours on end by starting a large download. Of course, the possibility also exists that somebody on the outside has gained access to your network, and it's important that this is not overlooked.

The first step in getting a handle on the situation is to take control of your home router and secure it so that it cannot be manipulated by anybody else. Most home routers nowadays have a small, cut-down web server running on board which serves the management web page. Using this page, an administrator can change all of the settings on the device, and it is usually accessible from both the wired and the wireless network. If you are using a Windows machine, the easiest way to reach this page is to do the following:

    1. Click the Start (pearl) button and, in the box which says "search programs and files", type cmd and press Enter. This should bring up a command prompt window. Inside this window, type the command "ipconfig". The output will show, among other things, the address of the default gateway. Take a careful note of this address (192.168.1.1 in this case); the short script after this list shows one way to automate the lookup.

    2. Open up a browser, type this default gateway address into the address bar and press Enter. If your router is new or poorly configured, you should now be looking at the control page for the device. If the device is configured properly, you should instead be looking at a login prompt page.

    3. Once logged in you will be able to control the settings of the router.
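For readers who would rather script that first step, here is a minimal sketch of how the default gateway can be pulled out of the ipconfig output on Windows using Python. It is an illustration only and assumes a fairly standard English-language ipconfig layout; adjust the pattern if your output differs.

```python
import re
import subprocess

def default_gateway():
    """Run ipconfig and return the first IPv4 default gateway found, or None."""
    output = subprocess.run(["ipconfig"], capture_output=True, text=True).stdout
    # Matches lines such as: "   Default Gateway . . . . . . . . . : 192.168.1.1"
    match = re.search(r"Default Gateway[ .]*:\s*(\d{1,3}(?:\.\d{1,3}){3})", output)
    return match.group(1) if match else None

if __name__ == "__main__":
    gateway = default_gateway()
    if gateway:
        print(f"Router management page is probably at http://{gateway}")
    else:
        print("No IPv4 default gateway found")
```

Pointing a browser at the printed address takes you straight to step 2.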



This post is not written to be a guide for any specific router so I will keep any further instructions necessarily wide in scope.

The following bullets will link to posts that will be made available soon which examine the different aspects of this problem. Check back soon to see them when they become available.

    • Who is connected? Checking which devices are connected to your router on the Wi-Fi and wired networks, and establishing whether or not they should be.

    • What are they doing? Most routers show a basic table of transferred bandwidth as part of their reporting. This can be used to examine the usage on your network and ascertain which devices are consuming most of it.

    • Securing my router. As touched on previously, the router should be configured so that only those users whom you wish to have access can reach both the network and the router's management page.

    • Customising the router's firmware. Home routers bought off the shelf nowadays ship with woefully inadequate firmware that is frequently shown to be buggy at best and insecure at worst. Consider replacing it with fully customisable open-source firmware such as DD-WRT or Tomato.

    • Open-source router management (Wireshark and SNMP). Want to take control of your home network to the max? Consider implementing network management, bandwidth management and device management; a short SNMP sketch follows this list.
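To give a flavour of the SNMP route, the sketch below reads a router's interface byte counters. It is only an illustration and rests on several assumptions not given in this post: that the router has SNMP enabled with the read community "public", that the WAN interface has ifIndex 2 (check IF-MIB::ifDescr first), and that the pysnmp library's classic synchronous hlapi (4.x style) is installed.

```python
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

ROUTER = "192.168.1.1"   # the default gateway address noted earlier
IF_INDEX = 2             # assumed ifIndex of the WAN interface

# One SNMP v2c GET for the in/out octet counters of a single interface.
error_indication, error_status, _, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),
    UdpTransportTarget((ROUTER, 161)),
    ContextData(),
    ObjectType(ObjectIdentity("IF-MIB", "ifInOctets", IF_INDEX)),
    ObjectType(ObjectIdentity("IF-MIB", "ifOutOctets", IF_INDEX)),
))

if error_indication or error_status:
    print("SNMP query failed:", error_indication or error_status.prettyPrint())
else:
    for var_bind in var_binds:
        print(" = ".join(x.prettyPrint() for x in var_bind))
```

Sampling the counters twice, a few seconds apart, and dividing the difference by the interval gives a rough bytes-per-second figure for the link, which is exactly the sort of usage table the second bullet above refers to.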



I hope this post has proved informative as an intro to controlling your home network. Check back soon for further updates.


Enhancing Oil, Gas and Power Operations - SCADA via Rustyice Satellite Solutions

Oil and gas operations are located in unforgiving environments, from the blistering cold of the arctic to the scorching heat of the deserts and the storming conditions out on the open sea. To sustain secure operating conditions in these remote areas, reliable communication is as vital to the end-user as the umbilical cord is to an unborn child.

 

Supervisory Control And Data Acquisition

Supervisory control and data acquisition (SCADA) is a unique aspect of oil, gas and power distribution operations in that it does not entail communication between people, but between machines, also known as machine-to-machine (M2M).

SCADA describes a computer-based system that manages mission-critical process applications on the ‘factory floor’. These applications are frequently critical for health, safety and the environment.

The term telemetry is often used in combination with SCADA. Telemetry describes the process of collecting data and performing remotely controlled actions via a suitable transmission medium. In the context of this article, the telemetry medium is a satellite communications solution.

SCADA in Oil, Gas and Power Distribution Operations

SCADA is not limited to a particular aspect of these operations. In the Oil and Gas industry, SCADA applications can be found upstream in areas such as well monitoring, downstream in areas such as pipeline operations, in trading through fiscal metering/custody transfer, and in logistics through applications such as inventory management of tank storage facilities. SCADA systems in the Power Distribution industry use RTUs and PLCs to perform the majority of on-site control. The RTU or PLC acquires the site data, which includes meter readings, pressure, voltage, or other equipment status, then performs local control and transfers the data to the central SCADA system. However, when comparing and specifying a solution for challenging SCADA environments, RTU and PLC-based systems are not equal.

PLC Systems are Sub-Optimal for Complex SCADA Systems

Originally designed to replace relay logic, PLCs acquire analog and/or digital data through input modules, and execute a program loop while scanning the inputs and taking actions based on these inputs. PLCs perform well in sequential logic control applications with high discrete I/O data counts, but suffer from overly specialized design, which results in limited CPU performance, inadequate communication flexibility, and lack of easy scalability when it comes to adding future requirements other than I/O.
With the rapid expansion of remote site monitoring and control, three critical industry business trends have recently come into focus:

• System performance and intelligence – Process automation improves efficiency and plant safety, and reduces labor costs. However, complex processes like AGA gas flow calculations and high-resolution event capture in electric utility applications require very high performance and system-level intelligence. The reality is that even high-performance PLCs cannot meet all these expectations.

• Communication flexibility – Redundant communication links between remote systems and the central SCADA application form the basis of a reliable, secure, and safe enterprise. Power routing automation in electric applications, water distribution, warning systems, and oil and gas processes all require unique communication media, including slow dial-up phone lines, medium-speed RF, and broadband wired/wireless IP.

• Configurability and reduced costs – Although process monitoring and control are well defined and understood within many industries, the quest for flexibility and reduced Total Cost of Ownership (TCO) remains challenging. In the past, proprietary PLC units customized with third-party components filled the niche, but suffered from lack of configurability and higher maintenance costs than fully integrated units. Today, businesses look for complete modular off-the-shelf systems that yield high configurability with a significant improvement in TCO.

At the technical level, several requirements currently influence the SCADA specification process:
• Local intelligence and processing – High processing throughput and 64-bit CPUs with expanded memory for user applications and logging, with support for highly complex control routines.

• High-speed communication ports – Monitoring large numbers of events requires systems that support multiple RS232/485 connections running at 230/460 kb/s and multiple Ethernet ports with 10/100 Mb/s capability.

• High-density, fast, and highly accurate I/O modules – Hardware that implements 12.5 kHz input counters with 16-bit analog inputs and 14-bit analog outputs for improved accuracy.

• Broadband wireless and wired IP communications – Recent innovation in IP devices demands reliable connectivity to local IEDs (Intelligent Electronic Devices) as well as support for emerging communication network standards.

• Strict adherence to open standard industry protocols – Including Modbus, DNP3, and DF-1 on serial and TCP/IP ports (a short framing sketch follows this list).

• Robust protocols for support of mixed communication environments.

• Protection of critical infrastructure – Enhanced security such as password-protected programming, over-the-air encryption, authentication, and IP firewall capability.
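Since Modbus/TCP is one of the open protocols listed above, the following sketch shows roughly what a "read holding registers" request looks like on the wire: a 7-byte MBAP header followed by a short PDU. The register address, count and unit ID are arbitrary example values rather than the settings of any particular RTU.

```python
import struct

def modbus_read_holding_registers(transaction_id, unit_id, start_address, count):
    """Build a Modbus/TCP 'Read Holding Registers' (function 0x03) request frame."""
    function_code = 0x03
    pdu = struct.pack(">BHH", function_code, start_address, count)
    # MBAP header: transaction id, protocol id (0 = Modbus), remaining byte count, unit id
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

# Example: ask unit 1 for 10 registers starting at address 0.
# The result is a 12-byte datagram, in line with the very small poll
# requests described later in this article.
frame = modbus_read_holding_registers(transaction_id=1, unit_id=1,
                                      start_address=0, count=10)
print(frame.hex())
```

Sending such a frame over a TCP connection to port 502 of a Modbus-capable RTU returns the requested register values; libraries such as pymodbus wrap this framing, but the raw layout above is what actually crosses the satellite link.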

Selecting a Satellite Communication Solution – Factors to Consider

Security

When selecting a satellite communications solution, there are numerous factors that must be considered. Enterprise applications like e-mail, Internet access, telephony, videoconferencing, etc. frequently tie into public communications infrastructure. Due to security and reliability considerations, it is best practice to isolate mission-critical SCADA communications infrastructure from public networks.

The Rustyice solution is a dedicated satellite communications network solution tailored for the SCADA applications environment. By virtue of system design, our solution offers greater security against hacker attacks and virus infestation which mainly target computers that are connected to the Internet and are running office applications.

Reliability

Due to the critical nature of most SCADA operations, a reliable communication solution is of utmost importance. The satellite communications industry is mature, with a proven track record. Satellite transponder availability is typically in the 99.99% range, a figure far superior to that of terrestrial networks. To build on this strength, our solution utilises a miniature satellite hub that is deployed at the end-user's SCADA control centre. Data to and from the remote terminal units (RTUs) are piped directly into the SCADA system. There is no vulnerable terrestrial back-haul from a communication service provider's facility, which could cause the entire network to crash if cut during public works, e.g. digging.

To increase the reliability of the hub, it is frequently deployed in a redundant/load-sharing configuration. This pushes hub availability even closer to 100%, making it far from the weakest link in the communication chain.

Types of Connectivity

In contrast to enterprise-related communications, which take place randomly, SCADA communication is quite predictable. It is a continuous process in which the SCADA application polls the RTUs at regular intervals. The outgoing poll request is a short datagram (packet) containing as few as 10 bytes, and the data returned from the RTUs is also in datagram format, with message sizes from 10 bytes to 250 bytes. One could easily assume that a satellite solution based upon dial-up connectivity, such as Inmarsat, Iridium or Globalstar, would be ideal for this application environment. However, since SCADA is not just data collection but also entails control (which at times can be of an emergency nature), you simply cannot wait for the system to encounter a busy connection. What is needed is a system that provides an ‘always on’ type of connection, commonly referred to as leased-line connectivity.
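To put some rough numbers on that polling pattern, the sketch below estimates how long one complete poll cycle takes over an always-on satellite link. All of the figures (50 RTUs, 10-byte polls, 250-byte worst-case replies, 9.6 kbit/s of usable throughput and about 550 ms of geostationary round-trip latency) are illustrative assumptions, not a specification of the Rustyice service.

```python
# Rough poll-cycle estimate for an always-on SCADA satellite link (illustrative figures only).
NUM_RTUS = 50
POLL_BYTES = 10          # outgoing poll request per RTU
REPLY_BYTES = 250        # worst-case reply per RTU
LINK_BPS = 9_600         # assumed usable channel throughput, bits per second
RTT_SECONDS = 0.55       # typical geostationary round-trip latency

def cycle_time_seconds():
    per_rtu_bits = (POLL_BYTES + REPLY_BYTES) * 8
    serialisation = per_rtu_bits / LINK_BPS       # time spent on the wire per RTU
    return NUM_RTUS * (serialisation + RTT_SECONDS)

print(f"Polling {NUM_RTUS} RTUs sequentially takes roughly {cycle_time_seconds():.1f} s")
```

Under these assumptions a full sweep of the field takes well under a minute; with dial-up connectivity you would have to add call set-up time to every poll, which is precisely why an ‘always on’ leased-line style of connection is preferred for control traffic.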

A Rustyice solution supports both circuit-switched (leased-line and multi-drop) and packet-switched (TCP/IP and X.25) applications concurrently.


Network Functions Virtualization on the Software-Defined Network

In the modern telecom industry, driven by the fast-changing demands that a connected society makes of it, a huge number of new applications are emerging, such as IPX, eHealth, Smart Cities and the Internet of Things. Each of these emergent applications requires new customisations of the ecosystem to manage traffic through a wide variety of service providers.

This is the core challenge faced by today's infrastructure, but we must also not overlook the fact that serving this larger ecosystem requires an enormous change to OSS infrastructure and to the way networks are being managed. Service providers sit in the awkward space between end users and the emergent technologies, but it is the fact that these technologies and their business models often change on a month-to-month basis that presents the greatest challenge.

If we consider all the IT assets ISPs and telcos have at their Points of Presence, they represent a significant and very much underused resource. The holy grail for many of these organisations is to unlock all of this storage and computing capacity and turn it into virtualised resources. This strategy opens up some intriguing possibilities, such as bringing remote resources to bear during times of heavy compute load at a specific locale from areas where capacity is less constrained. In infrastructure terms, this cloud-oriented world of adding new network capacity whenever and wherever it is needed becomes a matter of merely sliding more cards into racks or deploying new software, which greatly lowers the cost of scaling the network by commoditising the components used to build up a service provider's infrastructure.

Agility of services is the key to this new world order, in which services can be created orders of magnitude more quickly than was traditionally the case. In this new model the division between content providers and service providers becomes blurred. The flexibility to manage this dynamism is the key to the industry being able to meet the demands that the connected society will increasingly place on it, and it is the players who manage this balancing act most effectively that will come out on top.

This is where NFV comes in. The advent of Network Functions Virtualization, or NFV, has strong parallels with the developments in the computing world that gave us the cloud, big data and other commodity computing advances. Using capacity where and when it is required, with far less visibility into the physical location of the network than we have today, presents a whole new set of challenges. As computing hardware has developed and become more capable, a greater level of software complexity has grown up alongside it.

The management of NFV will be critical to its operation, and the way that end-user functionality is moving to the cloud today offers a sneak preview of this. A lot of careful design consideration will be required, and service providers need to begin adapting their infrastructure today to accommodate this future virtualization.

Closely related to, and indeed an enabler of, the trend towards NFV is Software-Defined Networking, or SDN. SDN can provide improved network efficiency and better cost savings, allowing the network to follow the sun, turning down servers or network hardware when the load lightens, or even turning them off at night. In a wireless environment, for example, if you could turn off all the excess network capability not in use from 10 p.m. to 6 a.m., you would see a significant decrease in the cost of electricity and cooling.

The continued integration of technologies such as OpenFlow into the latest network management implementations will further enable this trend, as OSS and BSS systems increasingly seek to pre-empt their traditionally reactive mechanisms by looking further up the business model, buying the vital time they need to maximise the effectiveness of their influence and, ultimately, the value added by their managed virtualised domains.
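As a flavour of what that programmability looks like in practice, here is a minimal sketch using the Ryu controller framework (one common open-source OpenFlow implementation; the choice of Ryu is my assumption, since the article names only OpenFlow itself). It installs a single low-priority 'table-miss' rule on every switch that connects, so that unmatched traffic is sent to the controller; a real OSS/BSS integration would of course push far richer policy.

```python
# A toy OpenFlow 1.3 application: run with `ryu-manager this_file.py`.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class TableMissInstaller(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Match everything and send it to the controller: the classic table-miss entry.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        instructions = [parser.OFPInstructionActions(
            ofproto.OFPIT_APPLY_ACTIONS, actions)]

        flow_mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                     match=match, instructions=instructions)
        datapath.send_msg(flow_mod)
```

The point is not the rule itself but the fact that network behaviour is now expressed as software: the same mechanism that installs this trivial entry could just as easily power down idle ports overnight or reroute capacity to where it is needed.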


Could ants power Web3.0 to new heights? OSPF vs ANTS

Having recently completed my latest M.Eng block on the subject of “Natural and Artificial Intelligence”, I became aware of advances made over the last decade towards a new paradigm of network traffic engineering. This new model turns its back on traditional destination-based solutions (OSPF, EIGRP, MPLS) to the combinatorial problem of decision making in network routing, favouring instead a constructive greedy heuristic which uses stochastic combinatorial optimisation. Put in more accessible terms, it leverages the emergent ability of systems composed of quite basic autonomous elements, working together, to perform a variety of complicated tasks with great reliability and consistency.

In 1986, the computer scientist Craig Reynolds set out to investigate this phenomenon through computer simulation. The mystery and beauty of a flock or swarm is perhaps best described in the opening words of his classic 1986 paper on the subject:

The motion of a flock of birds is one of nature’s delights. Flocks and related synchronized group behaviors such as schools of fish or herds of land animals are both beautiful to watch and intriguing to contemplate. A flock … is made up of discrete birds yet overall motion seems fluid; it is simple in concept yet is so visually complex, it seems randomly arrayed and yet is magnificently synchronized. Perhaps most puzzling is the strong impression of intentional, centralized control. Yet all evidence indicates that flock motion must be merely the aggregate result of the actions of individual animals, each acting solely on the basis of its own local perception of the world.

An analogy with the way ant colonies function suggests that the emergent ability of ant colonies to reliably and consistently optimise paths could be leveraged to enhance the way that the combinatorial optimisation problem of complex network path selection is solved.

The fundamental difference between modelling a complex telecommunications network and more commonplace problems of combinatorial optimisation, such as the travelling salesman problem, is the dynamic nature of the state of a network such as the internet at any given moment. In the TSP, the towns, the routes between them and the associated distances don’t change. Network routing, however, is a dynamic problem. It is dynamic in space, because the shape of the network – its topology – may change: switches and nodes may break down and new ones may come on line. But the problem is also dynamic in time, and quite unpredictably so. The amount of network traffic will vary constantly: some switches may become overloaded, there may be local bursts of activity that make parts of the network very slow, and so on. So network routing is a very difficult problem of dynamic optimisation. Finding fast, efficient and intelligent routing algorithms is a major headache for telecommunications engineers.

So how, you may ask, could ants help here? Individual ants are behaviourally very unsophisticated insects. They have a very limited memory and exhibit individual behaviour that appears to have a large random component. Acting as a collective, however, ants manage to perform a variety of complicated tasks with great reliability and consistency, for example finding the shortest routes from their nest to a food source.



These behaviours emerge from the interactions between large numbers of individual ants and their environment. In many cases, the principle of stigmergy is used. Stigmergy is a form of indirect communication through the environment. Like other insects, ants typically produce specific actions in response to specific local environmental stimuli, rather than as part of the execution of some central plan. If an ant’s action changes the local environment in a way that affects one of these specific stimuli, this will influence the subsequent actions of ants at that location.

The environmental change may take either of two distinct forms. In the first, the physical characteristics may be changed as a result of carrying out some task-related action, such as digging a hole, or adding a ball of mud to a growing structure. The subsequent perception of the changed environment may cause the next ant to enlarge the hole, or deposit its ball of mud on top of the previous ball. In this type of stigmergy, the cumulative effects of these local task-related changes can guide the growth of a complex structure. This type of influence has been called sematectonic.

In the second form, the environment is changed by depositing something which makes no direct contribution to the task, but is used solely to influence subsequent behaviour which is task related. This sign-based stigmergy has been highly developed by ants and other social insects, which use a variety of highly specific volatile hormones, or pheromones, to provide a sophisticated signalling system. It is primarily this second mechanism of sign-based stigmergy that has been successfully simulated with computer models and applied as a model to a system of network traffic engineering.

In the traditional network model, packets move around the network completely deterministically. A packet arriving at a given node is routed by the device which simply consults the routing table and takes the optimum path based on its destination. There is no element of probability as the values in the routing table represent not probabilities, but the relative desirability of moving to other nodes.

In the ant colony optimisation model, virtual ants also move around the network, their task being to constantly adjust the routing tables according to the latest information about network conditions. For an ant, the values in the table are probabilities that its next move will be to a certain node. The progress of an ant around the network is governed by the following informal rules (a short sketch after the list shows the corresponding table update):

    • Ants start at random nodes.

    • They move around the network from node to node, using the routing table at each node as a guide to which link to cross next.

    • As it explores, an ant ages, the age of each individual being related to the length of time elapsed since it set out from its source. However, an ant that finds itself at a congested node is delayed, and thus made to age faster than ants moving through less choked areas.

    • As an ant crosses a link between two nodes, it deposits pheromone; however, it leaves it not on the link itself, but on the entry for that link in the routing table of the node it left. Other ‘pheromone’ values in that column of the node's routing table are decreased, in a process analogous to pheromone decay.

    • When an ant reaches its final destination it is presumed to have died and is deleted from the system. R.I.P.
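To make the pheromone bookkeeping concrete, here is a minimal sketch of the table update an ant might perform after crossing a link. The reinforcement constant and the rule that younger (faster) ants deposit more pheromone are illustrative choices of mine rather than the exact AntNet equations.

```python
import random

# routing_table[destination][next_hop] = probability of forwarding via that neighbour.
# A toy table for a single node with three neighbours and one destination "D".
routing_table = {"D": {"A": 0.34, "B": 0.33, "C": 0.33}}

def choose_next_hop(table, destination):
    """Pick a neighbour at random, weighted by the pheromone probabilities."""
    row = table[destination]
    return random.choices(list(row), weights=list(row.values()), k=1)[0]

def reinforce(table, destination, chosen_hop, ant_age, strength=0.1):
    """Deposit pheromone on the chosen entry, then renormalise the row.

    Renormalising decreases every other entry, which plays the role of the
    pheromone decay described in the rules above. Older (slower) ants
    deposit less, so congested paths are reinforced more weakly.
    """
    row = table[destination]
    row[chosen_hop] += strength / (1.0 + ant_age)
    total = sum(row.values())
    for hop in row:
        row[hop] /= total

hop = choose_next_hop(routing_table, "D")
reinforce(routing_table, "D", hop, ant_age=2.5)
print(hop, routing_table["D"])
```

Repeated over many virtual ants, updates like this steadily bias each node's table towards the links that currently offer the quickest way to each destination, while still leaving a little probability on the alternatives.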



Testing the ant colony optimisation system, and measuring its performance against that of a number of other well-known routing techniques, produced good results, and the system outperformed all of the established mechanisms. However, there are potential problems of the kind that constantly plague all dynamic optimisation algorithms. The most significant is that, after a long period of stability and equilibrium, the ants will have become locked into their accustomed routes. They become unable to break out of these patterns to explore new routes capable of meeting the new conditions that a sudden change to the network could create. This can be mitigated, however, in the same way that evolutionary computation introduces mutation to explore new possibilities: by introducing an element of purely random behaviour into the ant's choices.

‘AntNet’ routing has been tested on models of US and Japanese communications networks, using a variety of different possible traffic patterns. The algorithm worked at least as well as, and in some cases much better than, four of the best-performing conventional routing algorithms. Its results were even comparable to those of an idealised ‘daemon’ algorithm with instantaneous and complete knowledge of the current state of the network.

It would seem we have not heard the last of these routing antics… (sorry, couldn't resist).
