Whispers & Screams
And Other Things

Isn't Satellite Communication Old School Now?

Space travel has always fascinated us, and it continues to do so. As humans, it is our intrinsic instinct to explore and discover whatever lies over the next horizon. Such was the motivation for the space race, which ultimately gave the world satellite communications, amongst many other things. When we look back at the grainy pictures from that febrile time in history, however, what we see is a world which looks very different to that of today. Indeed, most of the sci-fi of the 1960s was set around about now. As people of their future looking back, it all seems rather quaint to us, but the benefits we have enjoyed from satellite communications have been many and varied. Since the launch of Telstar, satellite communications has enabled us to beam the finger of mass communications to every corner of the planet.

The above notwithstanding, our world today is criss-crossed by undersea cables between every continent and across every sea. Satellite communication (or SATCOM, as we will refer to it from here on) would seem to no longer be necessary... or is it? Let's take a look at the benefits it brought to us at its genesis.

SATCOM is the ultimate mobile technology. It offers cable-free communications across the whole footprint of a beam. For some types of spacecraft, such a footprint can cover many hundreds of thousands of square miles from only one beam, and a single spacecraft can support many beams. That we can use this technology anywhere within the beam is such an enormous asset that it completely revolutionises our activity in the remotest areas of the planet. It is now possible to call your mum from a rowing boat in the middle of the Atlantic Ocean on Mother's Day, or indeed on any day, using technology no more byzantine than a satellite phone.

Satellite comms is also relatively cheap, although the person who owns the satellite phone may ask you to keep it brief whilst calling your mum. Mobile terminals are cheap and cheerful when examined in the context of global communication methods, and they can quite easily be adapted to support voice, video or data, or indeed all three at once. SATCOM is used extensively as a medium through which to deliver broadband internet services to hard-to-reach areas within developed countries, not to mention those with a less ubiquitous infrastructure. The frequencies used for SATCOM are selected specifically for their ability to resist atmospheric absorption, enabling them to cover the enormous distances required. On top of this, it is impossible to ignore the enormous use of satellite for broadcast media such as television, where the system is set up primarily for one-way communication. In summary, then, satellite communications has delivered, and continues to deliver, enormous benefits, and it has a number of unique selling points.

 

The premise of this post is not to confirm the obsolescence of SATCOM but rather to examine its place in the ever-changing telecommunications landscape. In today's world of wireless communications, high-definition television and global access to the Internet, many people are unclear about the inherent advantages of satellite communications, but those advantages persist, and they are many.

 

Cost Effective - The cost of satellite capacity doesn't increase with the number of users/receive sites, or with the distance between communication points. Whether crossing continents or staying local, satellite connection cost is distance insensitive. 

Global Availability - Communications satellites cover all land masses and there is growing capacity to serve maritime and even aeronautical markets. Customers in rural and remote regions around the world who cannot obtain high speed Internet access from a terrestrial provider are increasingly relying on satellite communications.

Superior Reliability - Satellite communications can operate independently from terrestrial infrastructure. When terrestrial outages occur from man-made and natural events, satellite connections remain operational.

Superior Performance - Satellite is unmatched for broadcast applications like television. For two-way IP networks, the speed, uniformity and end-to-end control of today's advanced satellite solutions are resulting in greater use of satellite by corporations, governments and consumers.

Immediacy and Scalability - Additional receive sites, or nodes on a network, can readily be added, sometimes within hours. All it takes is ground-based equipment. Satellite has proven its value as a provider of "instant infrastructure" for commercial, government and emergency relief communications.

Versatility and More - Satellites effectively support all forms of communications on a global basis, ranging from simple point-of-sale validation to bandwidth-intensive multimedia applications. Satellite solutions are highly flexible and can operate independently or as part of a larger network.

 

As we move forward and the need for ubiquitous communications becomes ever more embedded in the fabric of our lives, satellite communication will move into a golden age. Techniques and mechanisms with which to leverage the spacecraft as a communications platform are continually evolving, and it is this swathe of new and exciting use cases that will take the communications satellite into the rest of the 21st century and beyond.

Ingenious new approaches such as those envisioned by companies like LeoSat and OneWeb demonstrate that the traditional paradigm of teleport-satellite-teleport communications is no longer de rigueur. As new business models seek to create optical meshed networks in the sky, new uses continue to emerge. Such networks could ultimately become the fastest communication links for distances over 10,000 miles, because light travels faster through a vacuum than it does through glass. For applications which need to shave every possible fraction of a second from network delays (and there are many), these new networks will surpass the existing terrestrial networks no matter how few routed hops are required. The high-speed world of financial algo trading, where microseconds cost millions, will quickly move to these types of networks once they reach production.
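Some back-of-envelope arithmetic shows why the vacuum advantage matters. The sketch below compares one-way propagation delay through fibre with an in-vacuum satellite path, assuming a 16,000 km great-circle route, a fibre refractive index of roughly 1.47, and a 20% path-length penalty for the up/down legs of the satellite mesh; all of these figures are illustrative assumptions, not measurements of any particular network.

```python
# Back-of-envelope comparison of one-way latency over ~10,000 miles
# (~16,000 km) for terrestrial fibre vs an in-vacuum satellite mesh.

C = 299_792.458  # speed of light in vacuum, km/s

def one_way_latency_ms(path_km: float, refractive_index: float = 1.0) -> float:
    """Propagation delay in milliseconds over a path of path_km."""
    return path_km / (C / refractive_index) * 1000

fibre_index = 1.47      # typical refractive index of optical fibre
ground_path = 16_000    # km, assumed great-circle distance
# Assume the satellite route is ~20% longer than the great circle
# once the up/down legs and inter-satellite hops are included.
space_path = ground_path * 1.2

print(f"fibre:  {one_way_latency_ms(ground_path, fibre_index):.1f} ms")
print(f"vacuum: {one_way_latency_ms(space_path):.1f} ms")
```

Even with the longer physical route, the vacuum path comes out roughly 14 ms faster one way, which is an eternity in the algo-trading world.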

As we move away from the turn of the 21st century, some may have expected satellite communication to be heading for its swansong, given the ubiquity and reach of terrestrial networks. I'd appreciate your thoughts in the comments section below as to what the future may hold for satellite communication, or indeed more broadly for spacecraft communication. I think it's fair to say that reports of its demise have been greatly exaggerated.


Enhancing Oil, Gas and Power Operations - SCADA via Rustyice Satellite Solutions

Oil and gas operations are located in unforgiving environments, from the blistering cold of the Arctic to the scorching heat of the deserts and the stormy conditions out on the open sea. To sustain secure operating conditions in these remote areas, reliable communication is as vital to the end user as the umbilical cord is to an unborn child.

 

Supervisory Control And Data Acquisition

Supervisory control and data acquisition (SCADA) is a unique aspect of oil, gas and power distribution operations in that it does not entail communication between people, but between machines, also known as machine-to-machine (M2M) communication.

SCADA describes a computer-based system that manages mission-critical process applications on the ‘factory floor’. These applications are frequently critical for health, safety and the environment.

The term telemetry is often used in combination with SCADA. Telemetry describes the process of collating data and performing remotely controlled actions via a suitable transmission media. In the context of this article, the telemetry media is a satellite communications solution.

SCADA in Oil, Gas and Power Distribution Operations

SCADA is not limited to a particular aspect of these types of operations. In the oil and gas industry, SCADA applications can be found upstream in areas such as well monitoring, downstream in areas such as pipeline operations, in trade by managing the fiscal metering/custody transfer operations, and in logistics in applications such as inventory management of tank storage facilities. SCADA systems in the power distribution industry use remote terminal units (RTUs) and programmable logic controllers (PLCs) to perform the majority of on-site control. The RTU or PLC acquires the site data, which includes meter readings, pressure, voltage, or other equipment status, then performs local control and transfers the data to the central SCADA system. However, when comparing and specifying a solution for challenging SCADA environments, RTU- and PLC-based systems are not equal.

PLC Systems are Sub-Optimal for Complex SCADA Systems

Originally designed to replace relay logic, PLCs acquire analog and/or digital data through input modules and execute a program loop while scanning the inputs and taking actions based on them. PLCs perform well in sequential logic control applications with high discrete I/O counts, but they suffer from an overly specialised design, which results in limited CPU performance, inadequate communication flexibility, and a lack of easy scalability when it comes to adding future requirements other than I/O.
With the rapid expansion of remote site monitoring and control, three critical industry business trends have recently come into focus:

• System performance and intelligence – Process automation improves efficiency and plant safety, and reduces labor costs. However, complex processes like AGA gas flow calculations and high-resolution event capture in electric utility applications require very high performance and system-level intelligence. The reality is that even high-performance PLCs cannot meet all these expectations.

• Communication flexibility – Redundant communication links between remote systems and the central SCADA application form the basis of a reliable, secure, and safe enterprise. Power routing automation in electric applications, water distribution, warning systems, and oil and gas processes all require unique communication mediums including slow dial-up phone lines, medium speed RF, and broadband wired/wireless IP.

• Configurability and reduced costs – Although process monitoring and control are well defined and understood within many industries, the quest for flexibility and reduced Total Cost of Ownership (TCO) remains challenging. In the past, proprietary PLC units customized with third party components filled the niche, but suffered from lack of configurability and higher maintenance costs than fully integrated units. Today, businesses look for complete modular off-the-shelf systems that yield high configurability with a significant improvement in TCO.

At the technical level, several requirements currently influence the SCADA specification process:
• Local intelligence and processing – High processing throughput: 64-bit CPUs with expanded memory for user applications and logging, and support for highly complex control routines.

• High-speed communication ports – Monitoring large numbers of events requires systems that support multiple RS232/485 connections running at 230/460 kb/s and multiple Ethernet ports with 10/100 Mb/s capability.

• High-density, fast, and highly accurate I/O modules – Hardware that implements 12.5 kHz input counters with 16-bit analog inputs and 14-bit analog outputs for improved accuracy.

• Broadband wireless and wired IP communications – Recent innovations in IP devices demand reliable connectivity to local IEDs (Intelligent Electronic Devices) as well as support for emerging communication network standards.

• Strict adherence to open standard industry protocols, including Modbus, DNP3, and DF-1, on serial and TCP/IP ports.

• Robust protocols for support of mixed communication environments.

• Protection of critical infrastructure – Enhanced security such as password-protected programming, over the air encryption, authentication, and IP firewall capability.

Selecting a Satellite Communication Solution – Factors to Consider

Security

When selecting a satellite communications solution, there are numerous factors that must be considered. Enterprise applications like e-mail, Internet access, telephony, videoconferencing, etc. frequently tie into public communications infrastructure. Due to security and reliability considerations it is considered best practice to isolate mission critical SCADA communications infrastructure from public networks.

The Rustyice solution is a dedicated satellite communications network solution tailored for the SCADA applications environment. By virtue of system design, our solution offers greater security against hacker attacks and virus infestation which mainly target computers that are connected to the Internet and are running office applications.

Reliability

Due to the critical nature of most SCADA operations, a reliable communication solution is of utmost importance. The satellite communications industry is mature, with a proven track record. Satellite transponder availability is typically in the 99.99% range, a figure far superior to that of terrestrial networks. To build on this strength, our solution utilises a miniature satellite hub that is deployed at the end user's SCADA control centre. Data to and from the remote terminal units (RTUs) are piped directly into the SCADA system. There is no vulnerable terrestrial backhaul from a communication service provider's facility, which can cause the entire network to crash if cut during public works, e.g. digging.

To increase the reliability of the hub, it is frequently deployed in a redundant/load-sharing configuration. This pushes the availability of the hub very close to 100%, making it far from the weakest link in the communication chain.
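The effect of redundancy is easy to quantify. A short sketch, assuming the two hubs fail independently and assuming an illustrative single-hub availability figure (not a Rustyice specification):

```python
# Availability of a redundant hub pair, assuming independent failures.

def combined_availability(single: float, n: int = 2) -> float:
    """Availability of n independent units in parallel (any one suffices)."""
    return 1 - (1 - single) ** n

single = 0.999  # assumed availability of one hub (illustrative)
print(f"pair: {combined_availability(single):.6f}")
```

With two hubs at an assumed 99.9% each, the pair is unavailable only when both fail at once, taking combined availability to 99.9999%, i.e. roughly half a minute of downtime per year.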

Types of Connectivity

In contrast to enterprise communications, which take place randomly, SCADA communication is quite predictable. It is a continuous process in which the SCADA application polls the RTUs at regular intervals. The outgoing poll request is a short datagram (packet) containing as few as 10 bytes, and the data returned from the RTUs are also in datagram format, with message sizes from 10 bytes to 250 bytes. One could easily assume that a satellite solution based upon dial-up connectivity such as Inmarsat, Iridium or Globalstar would be ideal for this application environment. However, since SCADA is not just data collection but also entails control (which at times can be of an emergency nature), you simply cannot wait for the system to encounter a busy connection. What is needed is a system that provides an 'always on' connection, commonly referred to as leased-line connectivity.
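The datagram sizes above make it easy to size such a channel. The sketch below uses the 10-byte request and worst-case 250-byte response quoted in the text; the RTU count and polling interval are illustrative assumptions.

```python
# Rough sizing of a polled SCADA channel: a ~10-byte poll request and
# a 10-250 byte response per RTU, per polling cycle.

def daily_volume_bytes(rtus: int, poll_interval_s: int,
                       request: int = 10, response: int = 250) -> int:
    """Worst-case bytes per day for polling all RTUs at a fixed interval."""
    polls_per_day = 86_400 // poll_interval_s
    return rtus * polls_per_day * (request + response)

# Example: 50 remote sites polled every 30 seconds.
total = daily_volume_bytes(rtus=50, poll_interval_s=30)
print(f"{total / 1_000_000:.1f} MB/day")
```

Even the worst case for 50 sites is under 40 MB per day, which illustrates why an always-on link with modest capacity suits SCADA far better than a fast but contended dial-up connection.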

A Rustyice solution supports both circuit switched (leased line and multi drop) and packet switched (TCP/IP and X.25) applications concurrently.


The Chirpsounder / Ionosonde


Anybody who has ever set up a working international HF link will know it can be a tricky business. You see, there's a pesky movable thing called the ionosphere which is pretty fundamental to the whole business.
Communicating with a point halfway round the planet using HF is like trying to play that old 70's children's game called Rebound. Since radio links are usually close to, or distinctly, line-of-sight links, communicating with a point on the other side of a sphere would seem like a fairly insurmountable problem. The first time this problem was solved using the ionosphere, it was probably by accident: early radio pioneers received signals from their fellow pioneers some way round the planet and began to wonder why and how it was happening.

The reason it was, and does, happen is a layer of the Earth's atmosphere called the ionosphere. The ionosphere is a region of the upper atmosphere, from about 85 km (53 mi) to 600 km (370 mi) in altitude, and includes the thermosphere and parts of the mesosphere and exosphere. It is distinguished by the fact that it is ionised by solar radiation. It plays an important part in atmospheric electricity, forms the inner edge of the magnetosphere and, among other functions, influences radio propagation to distant places on the Earth. This last is the reason we as telecommunications engineers are interested in it.

The ionosphere is a layer of electrons and electrically charged atoms and molecules in the Earth's upper atmosphere, ranging from a height of about 50 km (31 mi) to more than 1,000 km (620 mi). It exists because the Sun's ultraviolet radiation ionises the gases at these altitudes. Because of the boundary between this charged layer and the relatively uncharged layer below, wave refraction occurs. This phenomenon takes place at different incidences for different frequencies and, with clever use of this property, the ionosphere can be employed to "bounce" a transmitted signal back down to the ground. Transcontinental HF connections can rely on up to five of these bounces, or hops.

It is the process of determining the appropriate frequencies, and their respective bounce points around the planet, that is the focus of this post. The applied physics of this refraction is beyond the scope of this post but, in a nutshell, it produces a spread of frequencies which bounce at different incident angles to the boundary layer, such that different distant points on the surface of the planet can be reached when the bounced radio wave returns to the ground. This is shown more clearly in the diagram on the left.
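The geometry of a single hop can be sketched with a simple flat-Earth approximation: the wave leaves the ground at some elevation angle, reflects at the layer height, and comes back down the same distance away. The layer height and elevation angle below are illustrative assumptions (a real path calculation must account for Earth curvature and the layer's variability).

```python
import math

# Single-hop ground range for an ionospheric "bounce", using a simple
# flat-Earth approximation: up at a given elevation angle, reflect at
# layer height h, and back down.

def hop_range_km(layer_height_km: float, elevation_deg: float) -> float:
    """Ground distance covered by one up-and-down bounce."""
    return 2 * layer_height_km / math.tan(math.radians(elevation_deg))

h = 300    # km, roughly the F2 layer (assumed)
elev = 10  # degrees above the horizon (assumed)
one_hop = hop_range_km(h, elev)
print(f"one hop: {one_hop:.0f} km, five hops: {5 * one_hop:.0f} km")
```

With these figures a single low-angle hop covers roughly 3,400 km, so the five hops mentioned above are ample for a transcontinental path.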

Unfortunately, it is not quite as straightforward as the diagram above suggests, as the strength and location of the ionosphere are always changing: day becomes night, and the radiation arriving from the Sun varies over time. This presents those wishing to use the phenomenon with the constant problem of determining which frequencies are workable and usable between any two given points on the Earth.

The problem of determining these usable frequencies was the driving force behind the invention of the chirpsounder (also known as an ionosonde). Chirpsounders operate in pairs, with a chirp transmitter in one location and a chirp receiver in another. The job of the transmitter is to sweep its radio output from one predetermined frequency to another over a given amount of time. A chirp receiver situated close to the transmitter and synchronised to match the sweep timings would receive the whole sweep from beginning to end, but the same chirp receiver placed two thousand miles away, over the Earth's horizon, may not fare so well. This is where the technology really comes into its own.


When a Tx/Rx pair of chirpsounders run a synchronised sweep between two distant locations, the receiver will hear the transmitter only during those parts of the sweep that are conducive to a working link between the two. This information is gathered by the chirp receiver and is used to provide the user with a graph showing frequency on the x-axis and receive delay on the y-axis, often with a display of received signal strength incorporated in the output. A sample chirpsounder output is shown on the right.

As can be seen, there are a number of elements shown on the trace, and each of these represents a successful reception of the signal from the transmitter. The more solid the line, the more reliable the link, and this information, used in parallel with the received power information, enables telecommunications professionals to choose the most appropriate frequency. Once the decision has been made, the operational transmitters and receivers can be set appropriately and the operational radio channel can begin to pass its traffic using the ionospheric bounce. Quite amazing really.
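A heavily simplified toy model gives a feel for what the trace encodes: for each swept frequency, echoes return after a round-trip delay of roughly 2h'/c (where h' is the virtual reflection height), and frequencies above the layer's critical frequency escape into space and produce no echo. The virtual height and critical frequency below are illustrative assumptions; a real ionogram shows delay climbing sharply as the critical frequency is approached.

```python
# Toy model of a chirpsounder trace: round-trip delay 2h'/c from an
# assumed virtual reflection height, with no echo above the assumed
# critical frequency.

C_KM_S = 299_792.458  # speed of light, km/s

def echo_delay_ms(freq_mhz: float, critical_mhz: float = 9.0,
                  virtual_height_km: float = 300.0):
    """Return round-trip delay in ms, or None if the wave escapes."""
    if freq_mhz >= critical_mhz:
        return None  # above the critical frequency: no reflection
    return 2 * virtual_height_km / C_KM_S * 1000

for f in range(2, 12, 2):
    d = echo_delay_ms(f)
    print(f"{f:>2} MHz:", "no echo" if d is None else f"{d:.2f} ms")
```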


Network Functions Virtualization on the Software-Defined Network

In the modern Telecom industry, driven by the fast-changing demands that a connected society makes of it, a huge number of new applications are emerging, such as IPX, eHealth, Smart Cities and the Internet of Things. Each of these emergent applications requires new customisations of the ecosystem to manage traffic through a wide variety of service providers.

This is the core challenge faced by today's infrastructure, but we must also not overlook the fact that serving this larger ecosystem requires an enormous change to OSS infrastructure and the way networks are managed. Service providers are placed in the awkward space between the end users and the emergent technologies, but it is the fact that these technologies and their business models often emerge on a month-to-month basis that presents the greatest challenge.

If we consider all the IT assets ISPs and telcos have at their points of presence, they represent a significant and very much underused resource. The holy grail for many of these organisations is to unlock all of this storage and computing capacity and turn it into a virtualised resource. This strategy opens up some intriguing possibilities, such as bringing remote resources to bear during times of heavy compute load at a specific locale from areas where capacity is less constrained. In infrastructure terms, this cloud-oriented world of adding new network capacity whenever and wherever it is needed becomes a matter of merely sliding more cards into racks or deploying new software, which greatly lowers the cost of scaling the network by commoditising the components used to build up a service provider's infrastructure.

Agility of services is the key to this new world order, in which services can be created orders of magnitude more quickly than was traditionally the case. In this new model, the division between content providers and service providers becomes blurred. The flexibility to manage this dynamism is the key to the industry being able to meet the demands that the connected society will increasingly place on it, and it will be those players who manage this balancing act most effectively that come out on top.

This is where NFV comes in. The advent of Network Functions Virtualization, or NFV, has strong parallels to the developments in the computing world that gave us the cloud, big data and other commodity computing advances. Using capacity where and when it is required, with far less visibility into the physical location of the network than we have today, presents a whole new set of challenges. As computing hardware has developed and become more capable, a greater level of software complexity has grown up alongside it.

The management of NFV will be critical to its operation, and the way that end-user functionality is moving to the cloud today represents a sneak preview of this. A lot of careful design consideration will be required, and service providers need to begin adapting their infrastructure today to accommodate this future virtualization.

Closely related, and indeed an enabler, to the trend of NFV is the Software-Defined Network, or SDN. SDN can provide improved network efficiency and better cost savings, allowing the network to follow the sun, turning down servers or network hardware when the load lightens, or even turning them off at night. In a wireless environment, for example, if you could turn off all the excess network capability not in use from 10 p.m. to 6 a.m., you would see a significant decrease in the cost of electricity and cooling.

The continued integration of technologies such as OpenFlow into the latest network management implementations will further enable this trend, as OSS and BSS systems increasingly seek to pre-empt their traditionally reactive mechanisms by looking farther up the business model, stealing vital time with which to maximise the effectiveness of their influence and, ultimately, the value added by their managed virtualised domains.


Could ants power Web3.0 to new heights? OSPF vs ANTS

Having recently completed my latest M.Eng block on the subject of "Natural and Artificial Intelligence", I became aware of advances made over the last decade towards a new paradigm of network traffic engineering. This new model turns its back on traditional destination-based solutions (OSPF, EIGRP, MPLS) to the combinatorial problem of decision making in network routing, favouring instead a constructive greedy heuristic which uses stochastic combinatorial optimisation. Put in more accessible terms, it leverages the emergent ability of systems comprised of quite basic autonomous elements working together to perform a variety of complicated tasks with great reliability and consistency.

In 1986, the computer scientist Craig Reynolds set out to investigate this phenomenon through computer simulation. The mystery and beauty of a flock or swarm is perhaps best described in the opening words of his classic paper on the subject:

The motion of a flock of birds is one of nature’s delights. Flocks and related synchronized group behaviors such as schools of fish or herds of land animals are both beautiful to watch and intriguing to contemplate. A flock ... is made up of discrete birds yet overall motion seems fluid; it is simple in concept yet is so visually complex, it seems randomly arrayed and yet is magnificently synchronized. Perhaps most puzzling is the strong impression of intentional, centralized control. Yet all evidence indicates that flock motion must be merely the aggregate result of the actions of individual animals, each acting solely on the basis of its own local perception of the world.

An analogy with the way ant colonies function has suggested that the emergent behaviour of ant colonies to reliably and consistently optimise paths could be leveraged to enhance the way that the combinatorial optimisation problem of complex network path selection is solved.

The fundamental difference between the modelling of a complex telecommunications network and more commonplace problems of combinatorial optimisation, such as the travelling salesman problem (TSP), is the dynamic nature of a network such as the internet at any given moment. In the TSP, the towns, the routes between them and the associated distances don’t change. Network routing, by contrast, is a dynamic problem. It is dynamic in space, because the shape of the network – its topology – may change: switches and nodes may break down and new ones may come on line. But the problem is also dynamic in time, and quite unpredictably so. The amount of network traffic will vary constantly: some switches may become overloaded, there may be local bursts of activity that make parts of the network very slow, and so on. So network routing is a very difficult problem of dynamic optimisation. Finding fast, efficient and intelligent routing algorithms is a major headache for telecommunications engineers.

So how, you may ask, could ants help here? Individual ants are behaviourally very unsophisticated insects. They have a very limited memory and exhibit individual behaviour that appears to have a large random component. Acting as a collective, however, ants manage to perform a variety of complicated tasks with great reliability and consistency, for example, finding the shortest routes from their nest to a food source.

These behaviours emerge from the interactions between large numbers of individual ants and their environment. In many cases, the principle of stigmergy is used. Stigmergy is a form of indirect communication through the environment. Like other insects, ants typically produce specific actions in response to specific local environmental stimuli, rather than as part of the execution of some central plan. If an ant's action changes the local environment in a way that affects one of these specific stimuli, this will influence the subsequent actions of ants at that location.

The environmental change may take either of two distinct forms. In the first, the physical characteristics may be changed as a result of carrying out some task-related action, such as digging a hole, or adding a ball of mud to a growing structure. The subsequent perception of the changed environment may cause the next ant to enlarge the hole, or deposit its ball of mud on top of the previous ball. In this type of stigmergy, the cumulative effects of these local task-related changes can guide the growth of a complex structure. This type of influence has been called sematectonic. In the second form, the environment is changed by depositing something which makes no direct contribution to the task, but is used solely to influence subsequent behaviour which is task related. This sign-based stigmergy has been highly developed by ants and other social insects, which use a variety of highly specific volatile hormones, or pheromones, to provide a sophisticated signalling system. It is primarily this second mechanism of sign-based stigmergy that has been successfully simulated with computer models and applied as a model to a system of network traffic engineering.

In the traditional network model, packets move around the network completely deterministically. A packet arriving at a given node is routed by the device, which simply consults the routing table and takes the optimum path based on the packet's destination. There is no element of probability: the values in the routing table represent not probabilities, but the relative desirability of moving to other nodes.

In the ant colony optimisation model, virtual ants also move around the network, their task being to constantly adjust the routing tables according to the latest information about network conditions. For an ant, the values in the table are probabilities that its next move will be to a certain node. The progress of an ant around the network is governed by the following informal rules:

    • Ants start at random nodes.

 

    • They move around the network from node to node, using the routing table at each node as a guide to which link to cross next.

 

    • As it explores, an ant ages, the age of each individual being related to the length of time elapsed since it set out from its source. However, an ant that finds itself at a congested node is delayed, and thus made to age faster than ants moving through less choked areas.

 

    • As an ant crosses a link between two nodes, it deposits pheromone; however, it leaves it not on the link itself, but on the entry for that link in the routing table of the node it just left. The other 'pheromone' values in that column of the node's routing table are decreased, in a process analogous to pheromone decay.

 

    • When an ant reaches its final destination it is presumed to have died and is deleted from the system. R.I.P.
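The informal rules above can be sketched in a few lines of code: each node keeps, per destination, a column of next-hop *probabilities*; an ant picks its next hop by weighted random choice, and on crossing a link it deposits "pheromone" on that link's table entry while the alternatives decay. The reinforcement constant and the tiny table below are illustrative assumptions, not the exact AntNet update formula.

```python
import random

# Minimal sketch of ant-based probabilistic routing table updates.

def choose_next_hop(probs: dict) -> str:
    """Pick a neighbour at random, weighted by the table's probabilities."""
    hops, weights = zip(*probs.items())
    return random.choices(hops, weights=weights)[0]

def reinforce(probs: dict, used_hop: str, reward: float = 0.1) -> None:
    """Deposit 'pheromone' on used_hop, then renormalise so the column
    still sums to 1 - the other entries decay as a side effect."""
    probs[used_hop] += reward
    total = sum(probs.values())
    for hop in probs:
        probs[hop] /= total

# One node's routing-table column for some destination:
table = {"B": 0.5, "C": 0.3, "D": 0.2}
for _ in range(100):
    reinforce(table, choose_next_hop(table))
print(table)  # the column still sums to 1; favoured hops have grown
```

Note how the weighted random choice never drives any entry all the way to zero, which is exactly the residual exploration that the mutation-style randomness discussed below builds on.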



Testing the ant colony optimisation system and measuring its performance against a number of other well-known routing techniques produced good results, and the system outperformed all of the established mechanisms. There are, however, potential problems of the kind that constantly plague dynamic optimisation algorithms. The most significant is that, after a long period of stability and equilibrium, the ants will have become locked into their accustomed routes, unable to break out of these patterns to explore new routes capable of meeting the new conditions that a sudden change to the network would create. This can be mitigated, however, in the same way that evolutionary computation introduces mutation to fully explore new possibilities: by introducing an element of purely random behaviour to the ant.

'Ant net' routing has been tested on models of US and Japanese communications networks, using a variety of different possible traffic patterns. The algorithm worked at least as well as, and in some cases much better than, four of the best-performing conventional routing algorithms. Its results were even comparable to those of an idealised ‘daemon’ algorithm, with instantaneous and complete knowledge of the current state of the network.

It would seem we have not heard the last of these routing antics... (sorry, couldn't resist).
