Whispers & Screams
And Other Things

Cisco OpenSOC

So a couple of days ago Cisco finally released its new open source security analytics framework, OpenSOC, to the developer community. OpenSOC sits conceptually at the intersection of Big Data and security analytics.

The current totalizer on the Breach Level Index website (breachlevelindex.com) sits at almost 2.4 billion data records lost this year so far, which works out at roughly 6 million per day. That rate is not going to drop anytime soon, as attackers are only getting better at getting their hands on this information. There is hope, however: even the best hackers leave clues in their wake, although finding those clues in enormous volumes of analytical data such as logs and telemetry can be the biggest challenge of all.

This is where OpenSOC seeks to make the crucial difference and bridge the gap. A platform for anomaly detection and incident forensics, it integrates elements of the Hadoop ecosystem such as Kafka, Elasticsearch and Storm to deliver a scalable platform for full-packet capture indexing, storage, data enrichment, stream processing, batch processing, real-time search and telemetry aggregation. The aim is to give security professionals the facility to detect and react to complex threats on a single converged platform.
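To make that a little more concrete, here is a minimal sketch of the sort of ingest step such a pipeline relies on: a Java producer publishing a raw telemetry event onto a Kafka topic for the stream-processing layer to pick up. The broker address, topic name and JSON fields are my own placeholders, not anything prescribed by OpenSOC.

    // Minimal telemetry publisher. Assumes a Kafka broker at localhost:9092 and a
    // topic named "telemetry"; both are placeholders for illustration only.
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class TelemetryPublisher {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                String event = "{\"ip_src_addr\":\"203.0.113.42\",\"ip_dst_addr\":\"198.51.100.7\",\"protocol\":\"TCP\"}";
                // Key by source IP so events from the same host land on the same partition.
                producer.send(new ProducerRecord<>("telemetry", "203.0.113.42", event));
            }
        }
    }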

The OpenSOC framework provides three key elements for security analytics:


    1. Context

       An extremely high-speed mechanism to capture and store security data. OpenSOC consumes telemetry and delivers it to multiple high-speed processors capable of heavy-lift contextual analytics, backed by storage suited to subsequent forensic investigation.

    2. Real-time Processing

       The application of enrichments such as threat intelligence, geolocation and DNS information to collected telemetry, enabling rapid investigation and response (a sketch of this kind of enrichment step follows this list).

    3. Centralized Perspective

       The interface presents alert summaries, together with the threat intelligence and enrichment data specific to each alert, on a single page. Advanced search capabilities and full packet-extraction tools are available for investigation without the need to pivot between multiple tools.
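As promised above, here is a rough sketch, not OpenSOC's actual code, of what a real-time enrichment step can look like as a Storm bolt: each telemetry tuple is tagged with a geolocation derived from its source IP before being passed downstream for indexing. The field names and the lookupCountry() helper are invented for illustration; a real deployment would consult a GeoIP database or a threat-intelligence feed.

    import org.apache.storm.topology.BasicOutputCollector;
    import org.apache.storm.topology.OutputFieldsDeclarer;
    import org.apache.storm.topology.base.BaseBasicBolt;
    import org.apache.storm.tuple.Fields;
    import org.apache.storm.tuple.Tuple;
    import org.apache.storm.tuple.Values;

    // Illustrative enrichment bolt: adds a country tag to each telemetry tuple.
    public class GeoEnrichmentBolt extends BaseBasicBolt {

        @Override
        public void execute(Tuple input, BasicOutputCollector collector) {
            String srcIp = input.getStringByField("ip_src_addr");
            String rawEvent = input.getStringByField("event");

            // Placeholder lookup; a production bolt would query a GeoIP database here.
            String country = lookupCountry(srcIp);

            // Emit the original event plus the enrichment for downstream indexing.
            collector.emit(new Values(srcIp, rawEvent, country));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("ip_src_addr", "event", "geo_country"));
        }

        private String lookupCountry(String ip) {
            return "UNKNOWN"; // stand-in for a real GeoIP lookup
        }
    }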



When sensitive data is compromised, a company's reputation, resources and intellectual property are put at risk. Quickly identifying and resolving the issue is critical, but traditional approaches to security incident investigation can be time-consuming. An analyst may need to take the following steps:

    1. Review reports from a Security Information and Event Management (SIEM) system and run batch queries on other telemetry sources for additional context.

    2. Research external threat intelligence sources to uncover proactive warnings of potential attacks.

    3. Turn to a network forensics tool with full packet capture and historical records in order to establish context.



Apart from requiring access to several tools and information sets, searching and analysing the volume of data collected can take minutes to hours using traditional techniques. With OpenSOC, security professionals can use a single tool to navigate the data with a narrowed focus, instead of wasting precious time trying to make sense of mountains of unstructured data.
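For a flavour of what that narrowed focus might look like at the API level, the sketch below queries an Elasticsearch index of enriched telemetry over its REST interface for the last hour of events from a single suspect address. The host, index name and field names are assumptions of mine rather than OpenSOC's actual schema.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    // Queries an assumed "telemetry" index for recent events from one source IP.
    public class TelemetrySearch {
        public static void main(String[] args) throws Exception {
            String query = "{ \"query\": { \"bool\": { \"must\": ["
                    + "{ \"term\": { \"ip_src_addr\": \"203.0.113.42\" } },"
                    + "{ \"range\": { \"timestamp\": { \"gte\": \"now-1h\" } } }"
                    + "] } } }";

            URL url = new URL("http://localhost:9200/telemetry/_search");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            try (OutputStream os = conn.getOutputStream()) {
                os.write(query.getBytes(StandardCharsets.UTF_8));
            }

            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line); // raw JSON hits, complete with enrichment fields
                }
            }
        }
    }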


Too Much Information - Hadoop and Big Data

Hadoop is a free, Java-based programming framework that supports the processing of large amounts of data in a distributed computing environment, running applications on clusters of thousands of nodes and handling thousands of terabytes of data. It is part of the Apache project sponsored by the Apache Software Foundation. Its distributed file system facilitates rapid data transfer between nodes and allows the system to continue operating uninterrupted if a node fails, lowering the risk of catastrophic system failure even when a significant number of nodes become inoperative.
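To show what that distributed file system means in practice, here is a small sketch using the HDFS Java API: write a file into the cluster, then read it back. The namenode address and path are assumptions for illustration; under the covers HDFS replicates the file's blocks across nodes, which is what lets it shrug off node failures.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Write a file to HDFS and read it back. Assumes a namenode at hdfs://localhost:9000.
    public class HdfsExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://localhost:9000");
            FileSystem fs = FileSystem.get(conf);

            Path path = new Path("/demo/hello.txt");
            try (FSDataOutputStream out = fs.create(path, true)) {
                out.write("hello from hdfs".getBytes(StandardCharsets.UTF_8));
            }

            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(fs.open(path), StandardCharsets.UTF_8))) {
                System.out.println(reader.readLine());
            }
            fs.close();
        }
    }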

Hadoop was inspired by Google's MapReduce, a software framework in which an application is broken down into numerous small parts. Any of these parts (also called fragments or blocks) can be run on any node in the cluster. Doug Cutting, Hadoop's creator, named the framework after his child's stuffed toy elephant. The current Apache Hadoop ecosystem consists of the Hadoop kernel, MapReduce, the Hadoop distributed file system (HDFS) and a number of related projects such as Apache Hive, HBase and Zookeeper.
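The idea of breaking an application into fragments that can run on any node is easiest to see in the canonical word-count example, sketched below with the standard org.apache.hadoop.mapreduce API: the map fragments emit (word, 1) pairs wherever the data happens to live, and the reducers sum them up.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Map phase: each input fragment is turned into (word, 1) pairs.
        public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                for (String token : value.toString().split("\\s+")) {
                    if (!token.isEmpty()) {
                        word.set(token);
                        context.write(word, ONE);
                    }
                }
            }
        }

        // Reduce phase: counts for each word are summed, wherever the maps ran.
        public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();
                }
                context.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }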

The Hadoop framework is used by major players including Google, Yahoo and IBM, largely for applications involving search engines and advertising. Linux is the preferred operating system for production clusters, although Hadoop can also run on Windows, BSD and OS X.

The rapid proliferation of unstructured data is one of the driving forces of the new paradigm of big data analytics. According to one study, we are now producing as much data every 10 minutes as was created from the beginning of recorded time through the year 2003. The preponderance of data being created is of the unstructured variety -- up to about 90%, according to IDC.

Big data is about not just capturing a wide variety of unstructured data, but combining that data with other data to gain new insights that can be used in many ways to improve business performance. In retail, for instance, that could mean delivering faster and better services to customers; in research, it could mean running tests over much wider sample sizes; in healthcare, it could mean faster and more accurate diagnoses of illnesses.

The ways in which big data will change our lives are significant, and are only beginning to reveal themselves to those willing to capture, combine and interrogate their data for answers to their big questions. For big data to deliver on its vast potential, however, technology must be in place that enables organizations to capture and store massive amounts of unstructured data in its native format. That is where Hadoop has become one of the enabling data processing technologies for big data analytics. Hadoop allows dramatically bigger business questions to be answered; we are already seeing this realized at the large public cloud companies, and it will soon spread into other IT-oriented industries and services.

More than 50% of participating companies have begun implementing the available Hadoop frameworks as data hubs or auxiliary data repositories alongside their existing infrastructures, according to Intel's 2013 IT Manager's Survey on How Organizations Are Using Big Data. A further 31% reported that they were evaluating one of the open-source Apache Hadoop frameworks.

So what are the key characteristics IT professionals should know about Hadoop in order to maximize its potential in managing unstructured data and advancing the cause of big data analytics? Here are five to keep in mind:

    1. Hadoop is economical. As an open-source software framework, Hadoop runs on standard servers. Hardware can be added to or swapped in and out of a cluster, and operational costs are relatively low because the software is common across the infrastructure, requiring little tuning for each physical server.

    2. Hadoop provides an efficient framework for processing large sets of data. MapReduce is the software programming framework in the Hadoop stack (see the word-count sketch above). Simply put, rather than moving data across a network to be processed, MapReduce moves the processing software to the data. In addition to simplifying the processing of big data sets, MapReduce gives programmers a common method of defining and orchestrating complex processing tasks across clusters of computers.

    3. Hadoop supports your existing database and analytics infrastructure rather than displacing it. Hadoop can handle data sets and tasks that are a problem for legacy databases. In big data environments, you want to make sure that the underlying storage and infrastructure platform for the database can handle the capacity and speed of big data initiatives, particularly for mission-critical applications. Given that capability, Hadoop can be, and has been, implemented as a replacement for existing infrastructure, but only where that fits the business need.

    4. Hadoop provides the best value when implemented on the right infrastructure. The Hadoop framework typically runs on mainstream standard servers using common Intel® server hardware. Newer servers with the latest Intel® processors, a larger memory footprint and more cache will typically provide better performance. Hadoop also performs better with faster in-node storage, so systems should contain some amount of solid-state storage, and the storage infrastructure should be optimized with the latest advances in automated tiering, deduplication, compression, encryption, erasure coding and thin provisioning. As Hadoop scales to encompass larger data sets it also benefits from faster networks, at which point 10Gb Ethernet rather than the typical 1GbE provides a further boost.

    5. Hadoop is supported by a large and active ecosystem. Big data is a big opportunity, not just for those using it to gain competitive advantage but also for those providing solutions. A large and active ecosystem has developed quickly around Hadoop, as it usually does around open-source projects. As an example, Intel recently invested $740 million in Cloudera, provider of a leading Hadoop distribution. Vendors are available to supply all or part of the Hadoop stack, including management software, third-party applications and a wide range of other tools that help simplify deployment.



Unstructured data is growing nonstop across a variety of applications, in a wide range of formats. Those companies that are best able to harness it and use it for competitive advantage are seeing significant results and benefits. That’s why more than 80% of the companies surveyed by Intel are using, implementing or evaluating Hadoop.
