Whispers & Screams
And Other Things

Teejay's Guest

It's really nice to have another dog around the place again. It makes the prospect of a new full-time member of the pack seem all the more appealing, and it can't come soon enough.


The Chirpsounder / Ionosonde


Anybody who has ever set up a working international HF link will know it can be a tricky business. You see, there's a pesky movable thing called the ionosphere which is pretty fundamental to the whole business.
Communicating with a point halfway round the planet using HF is like trying to play that old 70's children's game called Rebound. Since radio links are usually close to, or distinctly, line of sight, communicating with a point on the other side of a sphere would seem like a fairly insurmountable problem. I'd imagine the first time this problem was solved using the ionosphere it was probably by accident, with early radio pioneers receiving signals from their fellow pioneers some way round the planet and beginning to wonder why and how that was happening.

The reason it did, and still does, happen is a thin layer of the Earth's atmosphere called the ionosphere. The ionosphere is a region of the upper atmosphere, from about 85 km (53 mi) to 600 km (370 mi) altitude, and includes the thermosphere and parts of the mesosphere and exosphere. It is distinguished by the fact that it is ionised by solar radiation. It plays an important part in atmospheric electricity and forms the inner edge of the magnetosphere. It has practical importance because, among other functions, it influences radio propagation to distant places on the Earth. This is the reason we as Telecommunications Engineers are interested in it.

The ionosphere is a layer of electrons and electrically charged atoms and molecules in the Earth's upper atmosphere, ranging from a height of about 50 km (31 mi) to more than 1,000 km (620 mi). It exists because ultraviolet radiation from the Sun causes the gases at these heights to ionise and develop a charge. Because of the boundary between this layer and the relatively uncharged layer below, wave refraction occurs. This phenomenon takes place at different incidences with different frequencies and, with clever utilisation of this property, the ionosphere can be used to "bounce" a transmitted signal back down to the ground. Transcontinental HF connections can rely on up to 5 of these bounces, or hops.

It is the process of determining the appropriate frequencies and their respective bounce points around the planet that is the focus of this post. The applied physics involved in this refraction is beyond the scope of this post but, in a nutshell, what it produces is a spread of frequencies which bounce at different incident angles to the boundary layer, such that different distant points on the surface of the planet can be reached when the bounced radio wave returns to the ground. This is shown more clearly in the diagram on the left.
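
To put a first-order number on this behaviour: a layer that returns vertically incident signals up to its critical frequency f_c will return obliquely incident signals up to a correspondingly higher maximum usable frequency, given by the classical secant law

    $f_{\mathrm{MUF}} = f_c \sec\theta$

where θ is the angle of incidence at the layer. The shallower the ray, the larger sec θ becomes, and so the higher the frequency that will still be refracted back to ground.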

Unfortunately, it is not quite as straightforward as the diagram above suggests, as the strength and location of the ionosphere are always changing: day becomes night, and radiation from the Sun varies over time. This presents those wishing to use this phenomenon with the constant problem of determining which frequencies are workable and usable between any two given points on the Earth.

The problem of determining these usable frequencies was the driving force behind the invention of the Chirpsounder (also known as an Ionosonde). Chirpsounders operate in tandem, with a Chirp transmitter in one location and a Chirp receiver in another. The job of the transmitter is to transmit a sweep of radio output from one predetermined frequency to another over a given amount of time. A Chirp receiver situated close to the transmitter would, if synchronised to match the sweep timings, receive the whole sweep from beginning to end, but the same Chirp receiver placed two thousand miles away, over the Earth's horizon, may not fare so well. This is where the technology really comes into its own.


When a Tx/Rx pair of Chirpsounders are running a synchronised sweep between two distant locations, the receiver will receive from the transmitter only during those parts of the sweep that are conducive to a working link between the two. This information is gathered by the Chirp receiver and is used to provide the user with a graph showing frequency on the x-axis and receive delay on the y-axis. There will also often be a display of receive signal strength incorporated in the output. A sample Chirpsounder output is shown on the right.
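
To make the geometry concrete, here is a toy single-hop model in Python. It is not anything a real Chirpsounder runs: the critical frequency, virtual height and hop distance are assumed values, the Earth is treated as flat, and the group delay comes out constant across frequency, where a real trace would show it rising as the frequency approaches the maximum usable frequency (MUF).

    import math

    C = 3e5                  # speed of light in km/s
    F_CRITICAL = 5.0         # MHz: vertical-incidence critical frequency (assumed)
    VIRTUAL_HEIGHT = 300.0   # km: virtual height of the reflecting layer (assumed)
    HOP_DISTANCE = 2000.0    # km: ground distance between Tx and Rx (assumed)

    half_range = HOP_DISTANCE / 2.0
    slant = math.sqrt(half_range ** 2 + VIRTUAL_HEIGHT ** 2)  # Tx to reflection point
    sec_theta = slant / VIRTUAL_HEIGHT   # secant of the angle of incidence
    muf = F_CRITICAL * sec_theta         # secant law, as above

    print(f"MUF for this path: {muf:.1f} MHz")
    for f_mhz in range(2, 31, 2):        # sweep 2-30 MHz, as a chirp transmitter would
        if f_mhz <= muf:
            delay_ms = 2.0 * slant / C * 1000.0   # up to the layer and back down
            print(f"{f_mhz:>2} MHz: received, group delay ~{delay_ms:.2f} ms")
        else:
            print(f"{f_mhz:>2} MHz: no return (above the MUF)")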

As can be seen, there are a number of elements shown on the trace, and each of these represents a successful reception of the signal from the transmitter. The more solid the line, the more reliable the link, and this information, used in parallel with the received power information, enables telecommunications professionals to choose the most appropriate frequency. Once the decision has been made, the operational transmitters and receivers can be set appropriately and the operational radio channel can begin to pass its traffic using the ionospheric bounce. Quite amazing really.


Configuring 3G Wireless WAN on Modular and Fixed ISRs (HWIC-3G-GSM, HWIC-3G-HSPA, PCEX-3G-HSPA-x)

Cisco Integrated Services Routers are branch routers which support the new paradigms of network traffic delivery in the cloud and on the move. They provide Internet connectivity to teleworkers and to minor sites supporting fewer than 20 users. They also support bridging and routing between the LAN and the WAN, whilst providing many advanced features such as antivirus protection.

The Third Generation (3G) Wireless High-Speed WAN Interface Card (HWIC) is a multiband, multiservice WAN card for use over WCDMA Radio Access Networks (RANs).

Both the fixed and the modular 3G routers can be used for primary WAN connectivity and as a backup for critical applications which require a fallback service. 3G WAN is supported on the following modular Cisco ISRs: 800, 1841, 1861, 2800 series, 3800 series, 1900, 2900 and 3900.

 

One of the first actions required will be to configure a new 3G HWIC data profile.

To configure your 3G HWIC data profile, you will need the following information from your service provider:

    • Username (if required by your carrier)
    • Password (if required by your carrier)
    • Access Point Name (APN)

Once obtained, we can begin to set up the 3G features on the equipment itself by following these procedures:

    1. Data Account Provisioning
    2. Data Call Setup
    3. Voice Initiated Data Callback or Remote Dial-in (Optional)

In order to provision our data account we must first have obtained the key information from the service provider. The next priority is to ensure that we have the necessary service availability and signal strength for the connection to work. We can use the following commands to examine the services available on the 3G network at the location in question.

    1. show cellular <slot/wic/port> network - This displays information about the carrier network.
    2. show cellular <slot/wic/port> radio - This shows the signal strength. We are looking for an RSSI of -90 dBm or better for a steady and reliable connection.
    3. show cellular <slot/wic/port> security - This shows the SIM lock status and modem lock status.



Once we have determined that the conditions are favourable we can go ahead and set up a modem data profile. To examine the existing data profiles configured on the equipment, use the command show cellular <slot/wic/port> profile.

Assuming the profile we need has not already been created, we will need to go ahead and create it. In order to do this we use the cellular gsm profile create command. The syntax required is as follows:

cellular <slot/wic/port> gsm profile create <profile number> <apn> <authentication> <username> <password>

For example:

cellular 0/0/0 gsm profile create 1 vodafone.apn chap 3guser 3guserpass

The data profile parameters are as follows:

    • apn - Access Point Name - This must be obtained from the service provider
    • authentication - Usually chap or pap
    • username - provided by service provider
    • password - provided by service provider



Once the data profile is properly set, we then look to set up the parameters for the correct operation of the data call.

Firstly, it is necessary to configure the cellular interface. The steps, in summary, are as follows:

1. configure terminal

2. interface cellular <slot/wic/port>

3. encapsulation ppp

4. ppp chap hostname <host>

5. ppp chap password 0 <password>

6. asynchronous mode interactive

7. ip address negotiated


The authentication parameters used here must be the same as those configured under the earlier GSM profile.
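
Tying those steps together, here is a minimal sketch of the interface configuration. The slot numbering (0/0/0) and the CHAP credentials simply reuse the illustrative values from the GSM profile example above:

    configure terminal
     interface cellular 0/0/0
      encapsulation ppp
      ! CHAP credentials must match those in the GSM profile created earlier
      ppp chap hostname 3guser
      ppp chap password 0 3guserpass
      asynchronous mode interactive
      ! the carrier assigns the WAN address via IPCP
      ip address negotiated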

 

Once this is configured, we need only configure the dialer. The steps for doing this, in summary, are as follows:

 

1. configure terminal

2. interface cellular <slot/wic/port>

3. dialer in-band

4. dialer idle-timeout <seconds>

5. dialer string <string>

6. dialer-group <number>

7. exit

8. dialer-list <dialer-group> protocol <protocol-name> {permit | deny | list <access-list-number> | access-group}

9. access-list <access-list-number> permit <ip source address>

10. line <slot/wic/port>

11. script dialer <regexp>

12. exit

13. chat-script <script name> "" "ATDT*98*<profile number>#" TIMEOUT <timeout value> CONNECT

14. interface cellular <slot/wic/port>
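
As a worked example, the following sketch pulls the dialer steps together. All the specific values (dialer group 1, a 300-second idle timeout, the script name gsm and GSM profile number 1) are illustrative assumptions, not requirements:

    configure terminal
     interface cellular 0/0/0
      dialer in-band
      dialer idle-timeout 300
      dialer string gsm
      dialer-group 1
      exit
     ! define the interesting traffic for dialer group 1
     dialer-list 1 protocol ip permit
     access-list 1 permit any
     ! attach the dialer script to the cellular async line
     line 0/0/0
      script dialer gsm
      exit
     ! dial GSM profile 1; the script name must match the regexp used above
     chat-script gsm "" "ATDT*98*1#" TIMEOUT 60 CONNECT

Note that the dialer string and the chat-script name must line up, and the dialer-group number must match the dialer-list number, otherwise no traffic will ever trigger the call.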

 

So that should be it. Assuming the router is properly configured elsewhere, the traffic should begin to flow using the 3G interface and everything should be working just fine. Of course, sometimes things don't work out quite so smoothly, and I will publish a post soon detailing the steps needed to troubleshoot these types of connections when they don't work as planned.

 

I hope this summary is useful and would appreciate your comments using the form provided below.


Cisco Open SOC

So, a couple of days ago, Cisco finally released their new open source security analytics framework, OpenSOC, to the developer community. OpenSOC sits conceptually at the intersection between Big Data and Security Analytics.

The current totalizer on the Breach Level Index website (breachlevelindex.com) sits at almost 2.4 billion data records lost this year so far, which works out at approximately 6 million per day. The levels of this data loss will not be dropping any time soon, as attackers are only going to get better at getting their hands on this information. There is hope, however: even the best hackers leave clues in their wake, although finding those clues in enormous amounts of analytical data such as logs and telemetry can be the biggest of challenges.

This is where OpenSOC will seek to make the crucial difference and bridge the gap. Incorporating a platform for anomaly detection and incident forensics, it integrates elements of the big data ecosystem such as Kafka, Elasticsearch and Storm to deliver a scalable platform enabling full-packet capture indexing, storage, data enrichment, stream processing, batch processing, real-time search and telemetry aggregation. It seeks to provide security professionals with the facility to detect and react to complex threats on a single converged platform.

The OpenSOC framework provides three key elements for security analytics:


    1. Context

       An extremely high-speed mechanism to capture and store security data. OpenSOC consumes data by delivering it to multiple high-speed processors capable of heavy-lift contextual analytics, in tandem with appropriate storage enabling subsequent forensic investigations.

    2. Real-time Processing

       Application of enrichments such as threat intelligence, geolocation, and DNS information to collected telemetry, providing for quick-reaction investigations (a toy sketch of this enrichment step follows this list).

    3. Centralized Perspective

       The interface presents alert summaries with threat intelligence and enrichment data specific to an alert on a single page. Advanced search capabilities and full packet-extraction tools are available for investigation without the need to pivot between multiple tools.
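
To make the enrichment idea concrete, here is a toy sketch of that step in Python. It is purely illustrative and is not OpenSOC's actual API: the lookup tables stand in for real GeoIP databases and threat intelligence feeds, and a production pipeline would do this inside a stream processor such as Storm rather than a single function.

    import socket

    GEO_DB = {"203.0.113.7": "AU"}     # stand-in for a GeoIP database (hypothetical)
    THREAT_FEED = {"203.0.113.7"}      # stand-in for a threat intelligence feed

    def enrich(event):
        """Attach geolocation, threat-intel and reverse-DNS context to a raw event."""
        ip = event["src_ip"]
        event["geo"] = GEO_DB.get(ip, "unknown")
        event["threat_listed"] = ip in THREAT_FEED
        try:
            event["rdns"] = socket.gethostbyaddr(ip)[0]
        except OSError:
            event["rdns"] = None       # no reverse record; keep the event anyway
        return event

    print(enrich({"src_ip": "203.0.113.7", "bytes": 1432}))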



When sensitive data is compromised, the company's reputation, resources, and intellectual property are put at risk. Quickly identifying and resolving the issue is critical, but traditional approaches to security incident investigation can be time-consuming. An analyst may need to take the following steps:

    1. Review reports from a Security Information and Event Manager (SIEM) and run batch queries on other telemetry sources for additional context.
    2. Research external threat intelligence sources to uncover proactive warnings of potential attacks.
    3. Use a network forensics tool with full packet capture and historical records in order to determine context.



Apart from the need to access several tools and information sets, the act of searching and analyzing the amount of data collected can take minutes to hours using traditional techniques. With a single converged tool, security professionals can instead navigate the data with narrowed focus, rather than wasting precious time trying to make sense of mountains of unstructured data.


Too Much Information - Hadoop and Big Data

Hadoop is a free, Java-based programming framework that supports the processing of large amounts of data in a distributed computing environment, making it possible to run applications on systems with thousands of nodes and thousands of terabytes of data. It is part of the Apache project sponsored by the Apache Software Foundation. Its distributed file system facilitates rapid data transfer rates among nodes and allows the system to continue operating uninterrupted in the case of a node failure; each block of a file is replicated across several nodes (three by default), so losing a node leaves other copies available. This approach lowers the risk of catastrophic system failure, even if a significant number of nodes become inoperative.

Hadoop was inspired by Google's MapReduce, a software framework in which an application is broken down into numerous small parts. Any of these parts (also called fragments or blocks) can be run on any node in the cluster. Doug Cutting, Hadoop's creator, named the framework after his child's stuffed toy elephant. The current Apache Hadoop ecosystem consists of the Hadoop kernel, MapReduce, the Hadoop distributed file system (HDFS) and a number of related projects such as Apache Hive, HBase and Zookeeper.

The Hadoop framework is used by major players including Google, Yahoo and IBM, largely for applications involving search engines and advertising. The preferred operating systems are Windows and Linux but Hadoop can also work with BSD and OS X.

The rapid proliferation of unstructured data is one of the driving forces of the new paradigm of big data analytics. According to one study, we are now producing as much data every 10 minutes as was created from the beginning of recorded time through the year 2003. The preponderance of data being created is of the unstructured variety -- up to about 90%, according to IDC.

Big data is about not just capturing a wide variety of unstructured data, but combining it with other data to gain new insights that can be used in many ways to improve business performance. For instance, in retail it could mean delivering faster and better services to customers; in research, it could mean conducting tests over much wider sample sizes; in healthcare, it could mean faster and more accurate diagnoses of illnesses.

The ways in which big data will change our lives are significant, and just beginning to reveal themselves to those willing to capture, combine, and discover answers to their big questions. For big data to deliver on its vast potential, however, technology must be in place to enable organizations to capture and store massive amounts of unstructured data in its native format. That's where Hadoop comes in: it has become one of the enabling data processing technologies for big data analytics. Hadoop allows dramatically bigger business questions to be answered, something we are already starting to see realized at large public cloud companies and which will shortly spread into other IT-oriented industries and services.

More than 50% of participating companies have begun implementing Hadoop frameworks as data hubs or auxiliary data repositories alongside their existing infrastructures, according to Intel's 2013 IT Manager's Survey on How Organizations are Using Big Data. A further 31% of organizations reported evaluating the open-source Apache Hadoop framework.

So what are the key characteristics IT professionals should know about Hadoop in order to maximize its potential in managing unstructured data and advancing the cause of big data analytics? Here are five to keep in mind:

    1. Hadoop is economical. As an open-source software framework, Hadoop runs on standard servers. Hardware can be added or swapped in or out of a cluster, and operational costs are relatively low because the software is common across the infrastructure, requiring little tuning for each physical server.

    2. Hadoop provides an efficient framework for processing large sets of data. MapReduce is the software programming framework in the Hadoop stack. Simply put, rather than moving data across a network to be processed, MapReduce provides a framework to move the processing software to the data (a minimal sketch of the model follows this list). In addition to simplifying the processing of big data sets, MapReduce also provides programmers with a common method of defining and orchestrating complex processing tasks across clusters of computers.

    3. Hadoop supports your existing database and analytics infrastructures, and does not displace them. Hadoop can handle data sets and tasks that are a problem for legacy databases. In big data environments, you want to make sure that the underlying storage and infrastructure platform for the database is capable of handling the capacity and speed of big data initiatives, particularly for mission-critical applications. Hadoop can be, and has been, implemented as a replacement for existing infrastructures, but only where doing so fits a business need or advantage.

    4. Hadoop provides the best value where it is implemented with the right infrastructure. The Hadoop framework typically runs on mainstream standard servers using common Intel® server hardware. Newer servers with the latest Intel® processors, a larger memory footprint and more cache will typically provide better performance. Hadoop also performs better with faster in-node storage, so systems should contain some amount of solid-state storage. In addition, the storage infrastructure should be optimized with the latest advances in automated tiering, deduplication, compression, encryption, erasure coding and thin provisioning. When Hadoop has scaled to encompass larger datasets it benefits from faster networks, at which point 10Gb Ethernet rather than typical 1GbE bandwidth provides further benefit.

    5. Hadoop is supported by a large and active ecosystem. Big data is a big opportunity, not just for those using it to deliver competitive advantage but also for those providing solutions. A large and active ecosystem has developed quickly around Hadoop, as usually happens around open-source solutions. As an example, Intel recently invested $740 million in Cloudera, provider of a leading Hadoop distribution. Vendors are available to provide all or part of the Hadoop stack, including management software, third-party applications and a wide range of other tools to help simplify the deployment of Hadoop.
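
As promised above, here is a minimal sketch of the MapReduce model itself, with the shuffle/sort phase simulated locally in Python. It is illustrative of the programming model only; a real job would distribute the map and reduce functions across a Hadoop cluster (for example via Hadoop Streaming) rather than run them in one process.

    from collections import defaultdict

    def mapper(line):
        # Map: emit a (word, 1) pair for every word in the input fragment.
        for word in line.lower().split():
            yield word, 1

    def reducer(word, counts):
        # Reduce: sum the counts gathered for each word.
        return word, sum(counts)

    def run_job(lines):
        shuffled = defaultdict(list)   # stands in for Hadoop's shuffle/sort phase
        for line in lines:
            for word, count in mapper(line):
                shuffled[word].append(count)
        return dict(reducer(w, c) for w, c in shuffled.items())

    # A tiny word count: the canonical MapReduce example.
    print(run_job(["big data is big", "hadoop processes big data"]))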



Unstructured data is growing nonstop across a variety of applications, in a wide range of formats. Those companies that are best able to harness it and use it for competitive advantage are seeing significant results and benefits. That’s why more than 80% of the companies surveyed by Intel are using, implementing or evaluating Hadoop.
