
Isn't Satellite Communication Old School Now?

Space travel has fascinated us and continues to do so. As humans, it will always be our intrinsic instinct to explore and discover whatever lies over the next horizon. Such was the motivation for the space race, which ultimately provided the world with satellite communications, amongst many other things. When we look back at the grainy pictures from that febrile time in history, however, what we see is a world which looks very different to that of today. Indeed, most of the sci-fi of the 1960s was set around about now. As people of their future looking back, it all seems rather quaint to us, but the benefits we have enjoyed from satellite communications have been many and varied. Since the launch of Telstar, satellite communications has enabled us to beam the finger of mass communication to every corner of the planet.

The above notwithstanding, our world today is criss-crossed by undersea cables between every continent and across every sea. Satellite communication (or SATCOM, as we will refer to it moving forward) would seem to no longer be necessary... or is it? Let's take a look at the benefits it brought to us at its genesis.

SATCOM is the ultimate mobile technology. It provides us with the possibility of cable-free communications across the whole footprint of a beam. For some types of spacecraft, such a footprint can cover many hundreds of thousands of square miles from only one beam, and a single spacecraft can support many beams. That we can utilise this technology anywhere within the beam is such an enormous asset that it completely revolutionises our activity in the remotest areas of the planet. It is now possible to call your mum from a rowing boat in the middle of the Atlantic Ocean on Mother's Day, or indeed on any day, using technology no more byzantine than a satellite phone.

Satellite comms is also relatively cheap, although the person who owns the satellite phone may ask you to keep it brief whilst you call your mum. Mobile terminals are, however, cheap and cheerful when examined in the context of global communication methods. They can also be quite easily adapted to support voice, video or data, or indeed all three at once. SATCOM is being used extensively as a medium through which to deliver broadband internet services to difficult-to-reach areas within developed countries, not to mention those with a less ubiquitous infrastructure. The frequencies used for SATCOM are selected specifically for their ability to resist absorption, enabling them to cover the enormous distances required. On top of this, it is impossible to ignore the enormous usage of satellite for broadcast media such as television, where the system is set up primarily for one-way communication. In summary, then, satellite communications has delivered, and continues to deliver, enormous benefits, and it has a number of key unique selling points.

The premise of this post, however, is not to confirm the obsolescence of SATCOM but rather to examine its place in the ever-changing telecommunications landscape. In today's world of wireless communications, high-definition television and global access to the Internet, many people are unclear about the inherent advantages of satellite communications, but those advantages persist and are many.

Cost Effective - The cost of satellite capacity doesn't increase with the number of users/receive sites, or with the distance between communication points. Whether crossing continents or staying local, satellite connection cost is distance insensitive. 

Global Availability - Communications satellites cover all land masses and there is growing capacity to serve maritime and even aeronautical markets. Customers in rural and remote regions around the world who cannot obtain high speed Internet access from a terrestrial provider are increasingly relying on satellite communications.

Superior Reliability - Satellite communications can operate independently from terrestrial infrastructure. When terrestrial outages occur from man-made and natural events, satellite connections remain operational.

Superior Performance - Satellite is unmatched for broadcast applications like television. For two-way IP networks, the speed, uniformity and end-to-end control of today's advanced satellite solutions are resulting in greater use of satellite by corporations, governments and consumers.

Immediacy and Scalability - Additional receive sites, or nodes on a network, can readily be added, sometimes within hours. All it takes is ground-based equipment. Satellite has proven its value as a provider of "instant infrastructure" for commercial, government and emergency relief communications.

Versatility and More - Satellites effectively support, on a global basis, all forms of communications, ranging from simple point-of-sale validation to bandwidth-intensive multimedia applications. Satellite solutions are highly flexible and can operate independently or as part of a larger network.

As we move forward and the need for ubiquitous communications becomes ever more embedded in the fabric of our lives, satellite communication will move into a golden age. Techniques and mechanisms with which to leverage the spacecraft as a communications platform are continually evolving, and it is this swathe of new and exciting use cases that will take the communications satellite through the rest of the 21st century and beyond.

Ingenious new techniques such as those envisioned by companies like LeoSat and OneWeb demonstrate that the traditional paradigm of teleport-satellite-teleport communications is no longer de rigueur. As new business models seek to create optical meshed networks in the sky, new uses continue to emerge. Such networks will ultimately become the fastest communication links for distances over 10,000 miles, because light travels faster through a vacuum than it does through glass. For applications which need to shave every possible fraction of a second from network delays (and there are many), these new networks will surpass the existing terrestrial networks no matter how few routed hops are required. The high-speed world of financial algo trading, where microseconds cost millions, will quickly move to these types of networks once they reach production.
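The physics behind that claim is easy to sanity-check. Below is a minimal Python sketch comparing one-way propagation delay through vacuum and through fibre over a 10,000-mile path. The fibre refractive index of roughly 1.47 is a typical figure assumed for illustration, not a number from this post, and both paths are treated as straight lines.

```python
# Back-of-the-envelope: one-way delay over 10,000 miles, vacuum vs. fibre.
# The refractive index is a typical value for silica fibre (an assumption),
# and the route is idealised as a straight line.

MILES_TO_KM = 1.609344
C_VACUUM_KM_S = 299_792.458   # speed of light in vacuum, km/s
FIBRE_INDEX = 1.47            # typical silica-fibre refractive index (assumed)

def one_way_delay_ms(distance_km: float, refractive_index: float = 1.0) -> float:
    """Propagation delay in milliseconds along a straight path."""
    return distance_km / (C_VACUUM_KM_S / refractive_index) * 1000.0

distance_km = 10_000 * MILES_TO_KM
print(f"vacuum: {one_way_delay_ms(distance_km):5.1f} ms")               # ~53.7 ms
print(f"fibre:  {one_way_delay_ms(distance_km, FIBRE_INDEX):5.1f} ms")  # ~78.9 ms
```

That is roughly 25 ms saved each way before routing and queuing are even counted, which is exactly the kind of margin the algo-trading world pays for, though a real LEO mesh would give some of it back in the up- and down-links.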

As we move slowly away from the turn of the 21st century, some may have expected that satellite communication was headed for its swansong, given the ubiquity and reach of terrestrial networks. I'd appreciate your thoughts in the comments section below as to what the future may hold for satellite communication, or indeed, more broadly, for spacecraft communication. I think it's fair to say that reports of its demise have been greatly exaggerated.


The Chirpsounder / Ionosonde

Anybody who has ever set up a working international HF link will know it can be a tricky business. You see, there's a pesky movable thing called the ionosphere which is pretty fundamental to the whole business.
Communicating with a point halfway round the planet using HF is like trying to play that old '70s children's game called Rebound. Since radio links are usually close to (or distinctly) line-of-sight, communicating with a point on the other side of a sphere would seem like a fairly insurmountable problem. I'd imagine the first time this problem was solved using the ionosphere it was probably by accident: some early radio pioneers received signals from fellow pioneers some way round the planet and began to wonder why and how it was happening.

The reason it happened, and still happens, is a thin layer of the Earth's atmosphere called the ionosphere. The ionosphere is a region of the upper atmosphere, from about 85 km (53 mi) to 600 km (370 mi) altitude, which includes the thermosphere and parts of the mesosphere and exosphere. It is distinguished by being ionised by solar radiation. It plays an important part in atmospheric electricity and forms the inner edge of the magnetosphere. It has practical importance because, among other functions, it influences radio propagation to distant places on the Earth. This is the reason we as telecommunications engineers are interested in it.

The ionosphere is a layer of electrons and electrically charged atoms and molecules in the Earth's upper atmosphere, ranging from a height of about 50 km (31 mi) to more than 1,000 km (620 mi). It exists because the Sun's ultraviolet radiation causes these gases to ionise and develop a charge. Because of the boundary between this layer and the relatively uncharged layer below, wave refraction occurs. This phenomenon takes place at different incidences for different frequencies and, with clever utilisation of this property, the ionosphere can be used to "bounce" a transmitted signal back down to the ground. Transcontinental HF connections can rely on up to five of these bounces, or hops.

It is the process of determining the appropriate frequencies and their respective bounce points around the planet that is the focus of this post. The applied physics involved in this refraction is beyond the scope of this post but, in a nutshell, what it produces is a spread of frequencies which bounce at different incident angles to the boundary layer, such that different distant points on the surface of the planet can be reached when the bounced radio wave returns to the ground.
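To make the angles-and-frequencies relationship a little more concrete, here is a toy Python sketch of the classic secant law under a simple flat-mirror model of the layer. The layer height and critical frequency below are illustrative assumptions, not measurements, and the flat geometry stretches thin on very long hops.

```python
# Toy secant-law illustration: the maximum usable frequency (MUF) for a hop
# rises as the wave meets the ionospheric layer at a shallower angle.
# Flat-mirror model; layer height and critical frequency are assumptions.
import math

def muf_mhz(critical_freq_mhz: float, hop_range_km: float,
            layer_height_km: float = 300.0) -> float:
    """MUF = foF2 * sec(theta), with theta measured from the vertical."""
    theta = math.atan((hop_range_km / 2.0) / layer_height_km)
    return critical_freq_mhz / math.cos(theta)

foF2 = 7.0  # example F2-layer critical frequency, MHz
for hop_km in (500, 1000, 2000, 3000):
    print(f"{hop_km:>5} km hop -> MUF ~ {muf_mhz(foF2, hop_km):4.1f} MHz")
# Longer hops meet the layer at a shallower angle, so higher frequencies
# are still returned to the ground instead of escaping into space.
```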

Unfortunately, it is not quite as straightforward as that, because the strength and location of the ionosphere are always changing: as day becomes night, and as radiation from the Sun varies over time. This presents those wishing to use the phenomenon with the constant problem of determining which frequencies are workable and usable between any two given points on the Earth.

The problem of determining these usable frequencies was the driving force behind the invention of the Chirpsounder (also known as an ionosonde). Chirpsounders operate in tandem: a Chirp transmitter in one location and a Chirp receiver in another. The job of the transmitter is to transmit a sweep of radio output from one predetermined frequency to another over a given amount of time. A Chirp receiver situated close to the transmitter would, if synchronised to match the sweep timings, receive all of the sweep from beginning to end, but the same Chirp receiver placed two thousand miles away, over the Earth's horizon, may not fare so well. This is where the technology really comes into its own.

When a Tx/Rx pair of Chirpsounders runs a synchronised sweep between two distant locations, the receiver will hear the transmitter only during those parts of the sweep that are conducive to a working link between the two. This information is gathered by the Chirp receiver and used to provide the user with a graph showing frequency on the x-axis and receive delay on the y-axis, often with a display of received signal strength incorporated in the output.

On such a trace, each element represents a successful reception of the signal from the transmitter. The more solid the line, the more reliable the link, and this information, used in parallel with the received power information, enables telecommunications professionals to choose the most appropriate frequency. Once the decision has been made, the operational transmitters and receivers can be set appropriately and the operational radio channel can begin to pass its traffic using the ionospheric bounce. Quite amazing really.
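For the curious, here is a rough Python sketch of the principle: the transmitter sweeps the band on a schedule the receiver also knows, so anything the receiver hears can be converted into a (frequency, delay) point on exactly that kind of trace. The sweep parameters and the toy single-band "ionosphere" are invented for illustration.

```python
# Sketch of a synchronised chirp sweep. The sweep parameters and the toy
# "ionosphere" (one usable band with a made-up delay curve) are assumptions;
# real sounders sweep something like 2-30 MHz over several minutes.

def toy_ionosphere(f_mhz: float):
    """Return one-hop propagation delay in ms if f_mhz propagates, else None."""
    if 8.0 <= f_mhz <= 18.0:                  # band the layer supports right now
        return 3.0 + 0.05 * (18.0 - f_mhz)    # invented delay curve
    return None                               # absorbed, or escapes into space

F_START, F_STOP, RATE = 2.0, 30.0, 0.1        # sweep 2-30 MHz at 0.1 MHz/s

trace = []
f = F_START
while f <= F_STOP:
    t_tx = (f - F_START) / RATE               # when the schedule says f is sent
    delay_ms = toy_ionosphere(f)
    if delay_ms is not None:
        t_rx = t_tx + delay_ms / 1000.0       # arrival time at the far receiver
        # The receiver knows t_tx from the shared schedule, so the offset from
        # the expected arrival time is the delay it plots against frequency.
        trace.append((f, (t_rx - t_tx) * 1000.0))
    f += 0.5

for f_mhz, d_ms in trace[:5]:
    print(f"{f_mhz:4.1f} MHz received, delay {d_ms:.2f} ms")
```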


Could ants power Web3.0 to new heights? OSPF vs ANTS

Having recently completed my latest M.Eng block on the subject of "Natural and Artificial Intelligence", I became aware of advances made over the last decade towards a new paradigm of network traffic engineering. This new model turns its back on traditional destination-based solutions (OSPF, EIGRP, MPLS) to the combinatorial problem of decision making in network routing, favouring instead a constructive greedy heuristic which uses stochastic combinatorial optimisation. Put in more accessible terms, it leverages the emergent ability of systems comprised of quite basic autonomous elements, working together, to perform a variety of complicated tasks with great reliability and consistency.

In 1986, the computer scientist Craig Reynolds set out to investigate this phenomenon through computer simulation. The mystery and beauty of a flock or swarm is perhaps best described in the opening words of his classic 1986 paper on the subject:

The motion of a flock of birds is one of nature’s delights. Flocks and related synchronized group behaviors such as schools of fish or herds of land animals are both beautiful to watch and intriguing to contemplate. A flock ... is made up of discrete birds yet overall motion seems fluid; it is simple in concept yet is so visually complex, it seems randomly arrayed and yet is magnificently synchronized. Perhaps most puzzling is the strong impression of intentional, centralized control. Yet all evidence indicates that flock motion must be merely the aggregate result of the actions of individual animals, each acting solely on the basis of its own local perception of the world.

An analogy with the way ant colonies function suggests that this emergent ability to reliably and consistently optimise paths could be leveraged to enhance the way the combinatorial optimisation problem of complex network path selection is solved.

The fundamental difference between the modelling of a complex telecommunications network and more commonplace problems of combinatorial optimisation, such as the travelling salesman problem (TSP), is the dynamic nature of a network such as the internet. In the TSP, the towns, the routes between them and the associated distances don’t change. Network routing, however, is a dynamic problem. It is dynamic in space, because the shape of the network – its topology – may change: switches and nodes may break down and new ones may come on line. But the problem is also dynamic in time, and quite unpredictably so. The amount of network traffic will vary constantly: some switches may become overloaded, there may be local bursts of activity that make parts of the network very slow, and so on. So network routing is a very difficult problem of dynamic optimisation. Finding fast, efficient and intelligent routing algorithms is a major headache for telecommunications engineers.

So how, you may ask, could ants help here? Individual ants are behaviourally very unsophisticated insects. They have a very limited memory and exhibit individual behaviour that appears to have a large random component. Acting as a collective, however, ants manage to perform a variety of complicated tasks with great reliability and consistency; for example, finding the shortest routes from their nest to a food source.

These behaviours emerge from the interactions between large numbers of individual ants and their environment. In many cases, the principle of stigmergy is used. Stigmergy is a form of indirect communication through the environment. Like other insects, ants typically produce specific actions in response to specific local environmental stimuli, rather than as part of the execution of some central plan. If an ant's action changes the local environment in a way that affects one of these specific stimuli, this will influence the subsequent actions of ants at that location. The environmental change may take either of two distinct forms. In the first, the physical characteristics may be changed as a result of carrying out some task-related action, such as digging a hole, or adding a ball of mud to a growing structure. The subsequent perception of the changed environment may cause the next ant to enlarge the hole, or deposit its ball of mud on top of the previous ball. In this type of stigmergy, the cumulative effects of these local task-related changes can guide the growth of a complex structure. This type of influence has been called sematectonic.

In the second form, the environment is changed by depositing something which makes no direct contribution to the task, but is used solely to influence subsequent behaviour which is task-related. This sign-based stigmergy has been highly developed by ants and other exclusively social insects, which use a variety of highly specific volatile hormones, or pheromones, to provide a sophisticated signalling system. It is primarily this second mechanism of sign-based stigmergy that has been successfully simulated with computer models and applied as a model to a system of network traffic engineering.

In the traditional network model, packets move around the network completely deterministically. A packet arriving at a given node is routed by the device which simply consults the routing table and takes the optimum path based on its destination. There is no element of probability as the values in the routing table represent not probabilities, but the relative desirability of moving to other nodes.

In the ant colony optimisation model, virtual ants also move around the network, their task being to constantly adjust the routing tables according to the latest information about network conditions. For an ant, the values in the table are probabilities that its next move will be to a certain node. The progress of an ant around the network is governed by the following informal rules (sketched in code after the list):

    • Ants start at random nodes.

    • They move around the network from node to node, using the routing table at each node as a guide to which link to cross next.

    • As it explores, an ant ages, the age of each individual being related to the length of time elapsed since it set out from its source. However, an ant that finds itself at a congested node is delayed, and thus made to age faster than ants moving through less choked areas.

    • As an ant crosses a link between two nodes, it deposits pheromone; it leaves it not on the link itself, however, but on the entry for that link in the routing table of the node it left. Other 'pheromone' values in that column of the node's routing table are decreased, in a process analogous to pheromone decay.

    • When an ant reaches its final destination it is presumed to have died and is deleted from the system. R.I.P.
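
Here is the sketch promised above: a minimal Python rendering of the pheromone update for a single node's routing-table column, loosely in the spirit of the 'ant net' routing discussed below. The reinforcement and decay constants, and the collapsing of an ant's whole journey into a single update step, are simplifications assumed for illustration.

```python
# Minimal sketch of probabilistic, pheromone-driven routing at one node.
# Constants and structure are illustrative assumptions, not the published
# ant-net algorithm in full (which uses forward and backward ants).
import random

def update_column(probs: dict[str, float], used_hop: str,
                  reinforce: float = 0.1) -> None:
    """Deposit 'pheromone' on the entry for the link an ant just crossed
    and decay the others, keeping the column a probability distribution."""
    for hop in probs:
        if hop == used_hop:
            probs[hop] += reinforce * (1.0 - probs[hop])  # reward this link
        else:
            probs[hop] *= (1.0 - reinforce)               # pheromone decay

def choose_hop(probs: dict[str, float], explore: float = 0.05) -> str:
    """Mostly follow the pheromone; occasionally move at random - the
    'mutation' that keeps ants from locking into stale routes."""
    if random.random() < explore:
        return random.choice(list(probs))
    return random.choices(list(probs), weights=list(probs.values()))[0]

# One node's routing-table column for a single destination.
column = {"B": 0.34, "C": 0.33, "D": 0.33}
for _ in range(100):                  # a hundred ants pass through the node
    update_column(column, choose_hop(column))
print(column)  # probabilities drift toward the links the ants favoured
```

Because the reward on the chosen entry and the decay on the others are complementary, the column always sums to one, so a data packet can read it either as probabilities or simply take the largest value as the most desirable next hop.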



Testing the ant colony optimisation system and measuring its performance against a number of other well-known routing techniques produced good results, and the system outperformed all of the established mechanisms. However, there are potential problems of the kind that constantly plague dynamic optimisation algorithms. The most significant is that, after a long period of stability and equilibrium, the ants will have become locked into their accustomed routes. They become unable to break out of these patterns to explore new routes capable of meeting the new conditions which would exist if a sudden change to the network's conditions were to take place. This can be mitigated, however, in the same way that evolutionary computation introduces mutation to fully explore new possibilities: by introducing an element of purely random behaviour into each ant.

'Ant net' routing has been tested on models of US and Japanese communications networks, using a variety of different possible traffic patterns. The algorithm worked at least as well as, and in some cases much better than, four of the best-performing conventional routing algorithms. Its results were even comparable to those of an idealised ‘daemon’ algorithm, with instantaneous and complete knowledge of the current state of the network.

It would seem we have not heard the last of these routing antics... (sorry, couldn't resist).


Forget about 3G, here comes 4G (LTE)

The LTE hits just keep coming: Chunghwa Telecom said this week that it plans to start testing LTE with Ericsson gear in northern Taiwan. Meanwhile, in Japan, Ericsson customer NTT DoCoMo has started its 4G upgrade. It plans to launch commercially in 2010.

Along with Cisco's recently approved purchase of Starent Networks, these are the latest moves in a market that is rapidly heating up, putting a spotlight on the opportunities for infrastructure vendors. Ericsson has been in the spotlight all week, since Swedish incumbent TeliaSonera launched the first commercial LTE network on Monday, using equipment from Ericsson as well as Huawei.

It’s likely that an infrastructure vendor battle will soon heat up as more trials get underway. Huawei is looking like a big threat to the Tier 1 vendors; it’s signed on to 25 trials and deployments worldwide, it says, including plans to integrate Belgian incumbent Belgacom’s GSM, HSPA and future LTE networks in a converged radio access network and all-IP core. The Chinese vendor will also replace Belgacom’s existing RAN supplier, which happens to be Nokia Siemens Networks.

Also, Telecom Italia said it is working with Huawei for an LTE trial in Turin.

That said, NSN and Alcatel-Lucent are determined to be part of the LTE story too. NSN recently announced that global operator Telefónica will run a six-month 4G trial in the Czech Republic on NSN’s end-to-end LTE solution. Meanwhile, NSN has also been tackling the voice-over-LTE goal: it completed successful IMS-compliant voice calls and SMS messaging using 3GPP-standardized LTE equipment, and says it will soon conduct VoLTE test calls with a fully implemented IMS system.

Not to be outdone, Alcatel-Lucent said that it too has called and texted across standard LTE equipment, but using the interim standard from the 3GPP known as VoLGA.

The first carriers out of the gate after TeliaSonera with the 4G broadband technology – which promises 20 Mbps to 40 Mbps in throughput, initially – will likely be Verizon Wireless and NTT DoCoMo. Regional carriers MetroPCS and U.S. Cellular also have plans to deploy LTE next year, along with KDDI in Japan, and Tele2 and Telenor in Europe. AT&T and China Mobile are planning LTE rollouts for 2011. Most incumbents have LTE on their to-do list at some point, making for a rich new vein for infrastructure vendors to mine.

Some markets will be richer than others. "Spectrum availability is the primary factor impacting deployment plans," said senior ABI analyst Nadine Manjaro. "In countries where telecommunications regulators are making appropriate spectrum available, many operators have announced plans to launch LTE. These include the U.S., Sweden, China and others. Where no such spectrum allocations exist, operators are postponing LTE plans." The United Kingdom, surprise surprise, will likely be slower to roll out LTE because of spectrum availability.
