Whispers & Screams
And Other Things

Configuring 3G Wireless WAN on Modular and Fixed ISRs (HWIC-3G-GSM, HWIC-3G-HSPA, PCEX-3G-HSPA-x)

Cisco Integrated Services Routers are branch routers that support the new paradigms of network traffic delivery in the cloud and on the move. They provide Internet connectivity to teleworkers and small sites supporting fewer than 20 users, and they support bridging and routing between the LAN and the WAN whilst providing many advanced features such as antivirus protection.

 

The Third Generation (3G) Wireless High-Speed WAN Interface Card (HWIC) is a multiband, multiservice WAN card for use over WCDMA Radio Access Networks (RAN).

 

Both the fixed and the modular 3G routers can be used for primary WAN connectivity or as a backup for critical applications which require a fallback service. 3G WAN is supported on the following Cisco ISRs: the 800, 1841, 1861, 2800 series, 3800 series, 1900, 2900 and 3900.

 

One of the first actions required will be to configure a new 3G HWIC data profile.

 

To configure your 3G HWIC data profile, you will need the following information from your service provider:

 

Username (if required by your carrier)

 

Password (if required by your carrier)

 

Access Point Name (APN)

 

Once obtained, we can begin to set up the 3G features on the equipment itself by following these procedures:




    1. Data Account Provisioning

    2. Data Call Setup

    3. Voice-Initiated Data Callback or Remote Dial-in (Optional)



In order to provision our data account we must have first obtained the key information from the service provider. The next priority is to ensure that we have the necessary service availability and signal strength in order for the connection to work. We need to use the following commands to examine the services available on the 3G network at the location in question.

    1. show cellular <slot/wic/port> network - This displays information about the carrier network.

    2. show cellular <slot/wic/port> radio - This shows the signal strength. We are looking for an RSSI of -90 dBm or better for a steady and reliable connection.

    3. show cellular <slot/wic/port> security - This shows the SIM lock status and modem lock status.



Once we have determined that the conditions are favourable, we can go ahead and set up a modem data profile. To examine the existing data profiles configured on the equipment, use the command show cellular <slot/wic/port> profile.
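As a sketch of these pre-checks, assuming the HWIC sits in slot 0, WIC 0, port 0 (an illustrative placement, not a requirement), the sequence would look like this:

```
router# show cellular 0/0/0 network
router# show cellular 0/0/0 radio
router# show cellular 0/0/0 security
router# show cellular 0/0/0 profile
```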

Assuming the profile we need does not already exist, we will need to create it. To do this we use the cellular gsm profile create command. The syntax required is as follows:

cellular <slot/wic/port> gsm profile create <profile number> <apn> <authentication> <username> <password>

for example

cellular 0/0/0 gsm profile create 1 vodafone.apn chap 3guser 3guserpass

The data profile parameters are as follows:

    • apn - Access Point Name - this must be obtained from the service provider

    • authentication - usually chap or pap

    • username - provided by the service provider

    • password - provided by the service provider



Once the data profile is properly set we then look to set up the parameters for the correct operation of the data call.

Firstly it is necessary to configure the cellular interface. The steps in summary are as follows:

1. configure terminal

 

2. interface cellular <slot/wic/port>

 

3. encapsulation ppp

 

4. ppp chap hostname <host>

 

5. ppp chap password 0 <password>

 

6. asynchronous mode interactive

 

7. ip address negotiated

 

The authentication parameters used here must match those configured under the earlier GSM profile.
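Pulling those steps together, a minimal sketch of the cellular interface configuration might look like the following, assuming the HWIC is in slot 0/0/0 and reusing the example credentials from the GSM profile created earlier:

```
router# configure terminal
router(config)# interface cellular 0/0/0
router(config-if)# encapsulation ppp
router(config-if)# ppp chap hostname 3guser
router(config-if)# ppp chap password 0 3guserpass
router(config-if)# asynchronous mode interactive
router(config-if)# ip address negotiated
```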

 

Once this is configured we need only configure the dialer and the steps for doing this in summary are as follows:

 

1. configure terminal

 

2. interface cellular <slot/wic/port>

 

3. dialer in-band

 

4. dialer idle-timeout <seconds>

 

5. dialer string <string>

 

6. dialer-group <number>

 

7. exit

 

8. dialer-list <dialer-group> protocol <protocol-name> {permit | deny | list <access-list-number> | access-group}

 

9. ip access-list <access-list-number> permit <ip source address>

 

10. line <slot/wic/port>

 

11. script dialer <regexp>

 

12. exit

 

13. chat-script <script name> "" "ATDT*98*<profile number>#" TIMEOUT <timeout value> CONNECT

 

14. interface cellular <slot/wic/port>
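As an illustrative sketch of those dialer steps, with assumed values throughout (slot 0/0/0, dialer group 1, a numbered standard access list 1 created with the access-list command, a 30-second idle timeout, and a chat script named gsm dialling profile 1):

```
router# configure terminal
router(config)# interface cellular 0/0/0
router(config-if)# dialer in-band
router(config-if)# dialer idle-timeout 30
router(config-if)# dialer string gsm
router(config-if)# dialer-group 1
router(config-if)# exit
router(config)# dialer-list 1 protocol ip permit
router(config)# access-list 1 permit any
router(config)# line 0/0/0
router(config-line)# script dialer gsm
router(config-line)# exit
router(config)# chat-script gsm "" "ATDT*98*1#" TIMEOUT 60 CONNECT
```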

 

So that should be it. Assuming the router is properly configured elsewhere, traffic should begin to flow over the 3G interface. Of course, things don't always work out quite so smoothly, and I will publish a post soon detailing the steps needed to troubleshoot these types of connections when they don't work as planned.

 

I hope this summary is useful and would appreciate your comments using the form provided below.


Cisco Open SOC

So, a couple of days ago, Cisco finally released its new open-source security analytics framework, OpenSOC, to the developer community. OpenSOC sits conceptually at the intersection between Big Data and security analytics.

The current totalizer on the Breach Level Index website (breachlevelindex.com) sits at almost 2.4 billion data records lost this year so far, which works out at approximately 6 million per day. The levels of this data loss will not be dropping anytime soon, as attackers are only going to get better at getting their hands on this information. There is hope, however: even the best hackers leave clues in their wake, although finding those clues in enormous amounts of analytical data such as logs and telemetry can be the biggest of challenges.

This is where OpenSOC seeks to make the crucial difference and bridge the gap. Incorporating a platform for anomaly detection and incident forensics, it integrates elements of the Hadoop environment such as Kafka, Elasticsearch and Storm to deliver a scalable platform enabling full packet capture indexing, storage, data enrichment, stream processing, batch processing, real-time search and telemetry aggregation. It seeks to give security professionals the facility to detect and react to complex threats on a single converged platform.

The OpenSOC framework provides three key elements for security analytics:


    1. Context


      An extremely high-speed mechanism to capture and store security data. OpenSOC delivers consumed data to multiple high-speed processors capable of heavy-lift contextual analytics, in tandem with appropriate storage, enabling subsequent forensic investigations.

 


    2. Real-time Processing


      Application of enrichments such as threat intelligence, geolocation, and DNS information to collected telemetry providing for quick reaction investigations.

 


    3. Centralized Perspective


      The interface presents alert summaries with threat intelligence and enrichment data specific to an alert on a single page. The advanced search capabilities and full packet-extraction tools are available for investigation without the need to pivot between multiple tools.



When sensitive data is compromised, the company’s reputation, resources and intellectual property are put at risk. Quickly identifying and resolving the issue is critical, but traditional approaches to security incident investigation can be time-consuming. An analyst may need to take the following steps:

    1. Review reports from a Security Incident and Event Manager (SIEM) and run batch queries on other telemetry sources for additional context.

 

    2. Research external threat intelligence sources to uncover proactive warnings of potential attacks.

 

    3. Use a network forensics tool with full packet capture and historical records in order to determine context.



Apart from having to access several tools and information sets, the act of searching and analyzing the amounts of data collected can take minutes to hours using traditional techniques. With a single converged tool, security professionals can instead navigate the data with a narrowed focus rather than wasting precious time trying to make sense of mountains of unstructured data.


Too Much Information - Hadoop and Big Data

Hadoop, a free, Java-based programming framework that makes it possible to run applications on systems with thousands of nodes involving thousands of terabytes, supports the processing of large amounts of data in a distributed computing environment and is part of the Apache project sponsored by the Apache Software Foundation. Its distributed file system facilitates rapid data transfer rates among nodes and allows the system to continue operating uninterrupted in the case of a node failure. This approach lowers the risk of catastrophic system failure, even if a significant number of nodes become inoperative.

Hadoop was inspired by Google's MapReduce, a software framework in which an application is broken down into numerous small parts. Any of these parts (also called fragments or blocks) can be run on any node in the cluster. Doug Cutting, Hadoop's creator, named the framework after his child's stuffed toy elephant. The current Apache Hadoop ecosystem consists of the Hadoop kernel, MapReduce, the Hadoop distributed file system (HDFS) and a number of related projects such as Apache Hive, HBase and Zookeeper.
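As a minimal illustration of the MapReduce idea (a plain-Python sketch, not Hadoop's actual Java API), a word count can be expressed as a map phase that emits (word, 1) pairs and a reduce phase that sums the counts per word:

```python
from collections import defaultdict

def map_phase(lines):
    # Map: split each input line into (word, 1) pairs.
    # In Hadoop, each fragment of input runs this on a different node.
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    # Shuffle/reduce: group the pairs by key and sum the counts.
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

lines = ["the quick brown fox", "the lazy dog", "the fox"]
result = reduce_phase(map_phase(lines))
print(result["the"])  # 3
```

The point of the split is that the map phase is embarrassingly parallel: the framework can ship the function to whichever nodes hold the data, then only the compact (word, count) pairs travel across the network for the reduce.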

The Hadoop framework is used by major players including Google, Yahoo and IBM, largely for applications involving search engines and advertising. Linux is the most common operating system for Hadoop deployments, but Hadoop can also run on Windows, BSD and OS X.

The rapid proliferation of unstructured data is one of the driving forces of the new paradigm of big data analytics. According to one study, we are now producing as much data every 10 minutes as was created from the beginning of recorded time through to the year 2003. The preponderance of data being created is of the unstructured variety -- up to about 90%, according to IDC.

Big data is about being able not just to capture a wide variety of unstructured data, but to combine it with other data to gain new insights that can be used in many ways to improve business performance. For instance, in retail it could mean delivering faster and better services to customers; in research, conducting tests over much wider sample sizes; in healthcare, reaching faster and more accurate diagnoses of illnesses.

The ways in which big data will change our lives are significant, and just beginning to reveal themselves to those who are willing to capture, combine and discover answers to their big questions. For big data to deliver on its vast potential, however, technology must be in place to enable organizations to capture and store massive amounts of unstructured data in its native format. That’s where Hadoop comes in as one of the enabling data processing technologies for big data analytics. Hadoop allows dramatically bigger business questions to be answered, a shift we are already starting to see realized at large public cloud companies and one that will shortly spread into other IT-oriented industries and services.

More than 50% of participating companies have begun implementing the available Hadoop frameworks as data hubs or auxiliary data repositories alongside their existing infrastructures, according to Intel’s 2013 IT Manager’s Survey on How Organizations are Using Big Data. In addition, a further 31% of organizations reported evaluating an open-source Apache Hadoop framework.

So what are the key characteristics IT professionals should know about Hadoop in order to maximize its potential in managing unstructured data and advancing the cause of big data analytics? Here are five to keep in mind:

    1. Hadoop is economical. As an open-source software framework, Hadoop runs on standard servers. Hardware can be added or swapped in or out of a cluster, and operational costs are relatively low because the software is common across the infrastructure, requiring little tuning for each physical server.

 

    2. Hadoop provides an efficient framework for processing large sets of data. MapReduce is the software programming framework in the Hadoop stack. Simply put, rather than moving data across a network to be processed, MapReduce provides a framework to move the processing software to the data. In addition to simplifying the processing of big data sets, MapReduce also provides programmers with a common method of defining and orchestrating complex processing tasks across clusters of computers.

 

    3. Hadoop supports your existing database and analytics infrastructures, and does not displace them. Hadoop can handle data sets and tasks that would be a problem for legacy databases. In big data environments, you want to make sure that the underlying storage and infrastructure platform for the database is capable of handling the capacity and speed of big data initiatives, particularly for mission-critical applications. Because of this capability, Hadoop can be, and has been, implemented as a replacement for existing infrastructures, but only where that fits the business need or offers an advantage.

 

    4. Hadoop provides the best value where it is implemented with the right infrastructure. The Hadoop framework typically runs on mainstream standard servers using common Intel® server hardware. Newer servers with the latest Intel® processors, a larger memory footprint and more cache will typically provide better performance. In addition, Hadoop performs better with faster in-node storage, so systems should contain some amount of solid-state storage, and the storage infrastructure should be optimized with the latest advances in automated tiering, deduplication, compression, encryption, erasure coding and thin provisioning. Once Hadoop has scaled to encompass larger datasets it also benefits from faster networks, at which point 10Gb Ethernet rather than typical 1GbE bandwidth provides further benefit.

 

    5. Hadoop is supported by a large and active ecosystem. Big data is a big opportunity, not just for those using it to deliver competitive advantage but also for those providing solutions. A large and active ecosystem has developed quickly around Hadoop, as usually happens around open-source solutions; as an example, Intel recently invested $740 million in Cloudera, provider of the leading Hadoop distribution. Vendors are available to provide all or part of the Hadoop stack, including management software, third-party applications and a wide range of other tools to help simplify the deployment of Hadoop.



Unstructured data is growing nonstop across a variety of applications, in a wide range of formats. Those companies that are best able to harness it and use it for competitive advantage are seeing significant results and benefits. That’s why more than 80% of the companies surveyed by Intel are using, implementing or evaluating Hadoop.


Cisco Banners

A banner is a useful tool for delivering a security message to visitors to the equipment. Cisco equipment uses four different banner types to provide different messages at different times: the exec process creation banner, the incoming terminal line banner, the login banner and the message-of-the-day (MOTD) banner.

Of these four types, the message of the day is the most extensively used banner. This message is seen by anybody connecting to the router, whether they connect via Telnet, the aux port or the console port.

[Screenshot: the banner types available on the command line]

 

The image above shows the available types on the command line.

The most frequently seen type of banner is the Message of the day (MOTD) as mentioned above. When configuring this type of banner the following prompt is seen:

=================================================================================================

Router(config)#banner motd ?
  LINE  c banner-text c, where 'c' is a delimiting character

Router(config)#banner motd #
Enter TEXT message.  End with the character '#'.
If you are not authorised to be using this router you must disconnect immediately.
#
Router(config)#^Z
Router#
20:25:12: %SYS-5-CONFIG_I: Configured from console by console
Router#exit

Router con0 is now available

Press enter to get started.

If you are not authorised to be using this router you must disconnect immediately.

Router>

=================================================================================================

The most important part to understand is the delimiting character: this is what tells the router when the message is complete. Any character can be used as the delimiting character, but you can’t use that character within the message itself. Once the message is complete, press Enter, then the delimiting character, and then Enter again.

Below are some details of the other banners discussed:

Exec banner - You can configure a line-activation (exec) banner to be displayed when an EXEC process (such as a line activation or an incoming connection to a VTY line) is created. Simply starting a user exec session through a console port will activate the exec banner.

Incoming banner - You can configure a banner to be displayed on terminals connected to reverse Telnet lines. This banner is useful for providing instructions to users of reverse Telnet.

Login banner - You can configure a login banner to be displayed on all connected terminals. This banner is displayed after the MOTD banner but before the login prompts. The login banner can’t be disabled on a per-line basis, so to globally disable it you’ve got to delete it with the no banner login command.
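As a brief sketch, the exec and incoming banners are configured with the same delimiting-character pattern as the MOTD banner; the '#' delimiter and the wording below are illustrative only:

```
Router(config)#banner exec #
Authorised users only. All sessions are logged.
#
Router(config)#banner incoming #
This is a reverse Telnet line. Follow your terminal server instructions.
#
```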

Here is an example of a login banner:
!
banner login ^C
-----------------------------------------------------------------
Cisco Router and Security Device Manager (SDM) is installed on this device.
This feature requires the one-time use of the username "cisco"
with the password "cisco". The default username and password have a privilege
level of 15.
Please change these publicly known initial credentials using SDM or the IOS
CLI.
Here are the Cisco IOS commands.
username <myuser> privilege 15 secret 0 <mypassword>
no username cisco
Replace <myuser> and <mypassword> with the username and password you want to
use.
For more information about SDM please follow the instructions in the QUICK
START GUIDE for your router or go to http://www.cisco.com/go/sdm

-----------------------------------------------------------------
^C
!
The above login banner should look pretty familiar: it’s the banner that Cisco ships in the default configuration of its ISR routers. Again, this banner is displayed before the login prompts but after the MOTD banner.


Network Functions Virtualization on the Software-Defined Network

In the modern telecoms industry, driven by the fast-changing demands that a connected society makes of it, a huge number of new applications are emerging, such as IPX, eHealth, smart cities and the Internet of Things. Each of these emergent applications requires new customisations of the ecosystem to manage traffic through a wide variety of service providers.

This is the core challenge faced by today's infrastructure, but we must also not overlook the fact that serving this larger ecosystem requires an enormous change to OSS infrastructure and to the way networks are managed. Service providers are placed in the awkward space between the end users and the emergent technologies, but it is the fact that these technologies and their business models often emerge on a month-to-month basis that presents the greatest challenge.

If we consider all the IT assets ISPs and telcos have at their points of presence, they represent a significant and very much underused resource. The holy grail for many of these organisations is to unlock all of this storage and computing capacity and turn it into a virtualized resource. This strategy opens up some intriguing possibilities, such as bringing remote resources to bear during times of heavy compute load at a specific locale from areas where capacity is less constrained. In infrastructure terms, this cloud-oriented world of adding new network capacity whenever and wherever it is needed becomes a matter of merely sliding more cards into racks or deploying new software, which greatly lowers the cost of scaling the network by commoditising the components used to build up a service provider's infrastructure.

Agility of services is the key to this new world order where services can be created orders of magnitude more quickly than was traditionally the case. In this new model the division between content providers and service providers becomes blurred. The flexibility to manage this dynamism is the key to the industry being able to meet the demands that the connected society will increasingly place on it and it will be those players who are able to manage this balancing act most effectively that will come out on top.

This is where NFV comes in. The advent of Network Function Virtualization, or NFV, has strong parallels with the developments in the computing world that gave us the cloud, big data and other commodity computing advances. Using capacity where and when it is required, with far less visibility into the physical location of the network than is available today, presents a whole new set of unique challenges. Just as in computing, as the hardware has developed and become more capable, a greater level of software complexity has grown up alongside it.

The management of NFV will be critical to its operation, and the way that end-user functionality is moving to the cloud today represents a sneak preview of this. A lot of careful design consideration will be required, and service providers need to begin adapting their infrastructure today to accommodate this future virtualization.

Closely related, and indeed an enabler of the trend towards NFV, is the Software-Defined Network, or SDN. SDN can provide improved network efficiency and better cost savings, allowing the network to follow the sun: turning down servers or network hardware when the load lightens, or even turning them off at night. In a wireless environment, for example, if you could turn off all the excess network capacity not in use from 10 p.m. to 6 a.m., you would see a significant decrease in the cost of electricity and cooling.

The continued integration of technologies such as OpenFlow into the latest network management implementations will further enable this trend, as OSS and BSS systems increasingly seek to pre-empt their traditionally reactive mechanisms by looking further up the business model, stealing vital time with which to maximise the effectiveness of their influence and, ultimately, the value added by their managed virtualized domains.
