Whispers & Screams
And Other Things

What on earth is making my home network so slow? (Part 1)

Let's face it, we've all been there: sitting wondering why a network connection that, until five minutes ago, had been working just fine is now all but useless. Less tech-savvy individuals may just shrug their shoulders and try again later, but everybody else is left wondering why. As a reader of this blog, you automatically fall into the latter category. So, to the problem. Could it be that somebody else in the house has started a large download? If so, that's the easiest cause to rule out just by asking around, but the plethora of devices in our houses today makes the job a lot more complex. For me, it was a long-forgotten mobile phone owned by my son, left on charge under the bed and set to auto-update its code and apps, that proved the final straw and drove me to come up with a solution to this problem.

Let's look at the problem in the round first of all. Homes nowadays usually have a router which connects to the cable company or to the telephone line. These routers allow all of the devices in the house to connect to the net, whether on the wireless or the wired side of life. It's not uncommon for a home network to support 10 to 20 devices, not all of which will be known about by every other member of the household. Any one of these devices has the potential to bring the network to its knees for hours on end by starting a large download. Of course, the possibility also exists that somebody on the outside has gained access to your network, and it's important that this is not overlooked.

The first step in getting a handle on the situation is to take control of your home router and secure it so that it cannot be manipulated by anybody else. Most home routers nowadays have a small, cut-down web server running on board which serves the management web page. Using this page, you can change all of the settings on the device, and it is usually accessible from both the wired and the wireless network. If you are using a Windows machine, the easiest way to establish a connection to this page is to do the following:

    1. Click the Start button and, in the box which says "search programs and files", type cmd and press Enter. This brings up a command window. Inside this window, type the command "ipconfig". Among other things, the output will show the address of the default gateway (192.168.1.1 in this case). Take a careful note of this address.

    2. Open up a browser, type this default gateway address into the address bar and press Enter. If your router is new or poorly configured you should now be looking at the control page for the device. If the device is configured properly you should instead be looking at a login prompt page.

    3. Once logged in you will be able to control the settings of the router.
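Step 1 can also be scripted. Here is a rough Python sketch (the function name and sample output are illustrative, and the regex assumes English-language "ipconfig" output) that pulls the default gateway address out of the command's text:

```python
import re

def default_gateway(ipconfig_output):
    """Extract the default gateway IPv4 address from `ipconfig` text."""
    match = re.search(r"Default Gateway[ .]*:\s*(\d+\.\d+\.\d+\.\d+)",
                      ipconfig_output)
    return match.group(1) if match else None

# Hypothetical sample of what `ipconfig` prints on a Windows machine:
sample = """\
Ethernet adapter Local Area Connection:

   IPv4 Address. . . . . . . . . . . : 192.168.1.12
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 192.168.1.1
"""

print(default_gateway(sample))  # -> 192.168.1.1
```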



This post is not written to be a guide for any specific router so I will keep any further instructions necessarily wide in scope.

The following bullets will link to posts that will be made available soon which examine the different aspects of this problem. Check back soon to see them when they become available.

    • Who is connected? Checking which devices are connected to your router on the Wi-Fi and wired networks and establishing whether or not they should be.

    • What are they doing? Most routers show a basic table of transferred bandwidth as part of their reporting. This can be used to examine the usage on your network and ascertain which devices are consuming most of the bandwidth.

    • Securing my router. As touched on previously, the router should be configured so that only those users whom you wish to have access are able to reach both the network and the router's management page.

    • Customising the router's code. Home routers purchased off the shelf nowadays often have woefully inadequate firmware that is frequently shown to be buggy at best and insecure at worst. Consider replacing this firmware with fully customisable open source firmware such as DD-WRT or Tomato.

    • Open source router management (Wireshark and SNMP). Want to take control of your home network to the max? Consider implementing network management, bandwidth management and device management.
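The bandwidth tables mentioned in the second bullet are built from interface byte counters that the router updates continuously. As a rough illustration (the function and the sample numbers are hypothetical), two readings of a device's counter and the time between them are enough to work out its throughput:

```python
def throughput_bps(bytes_then, bytes_now, seconds):
    """Convert two readings of an interface byte counter into bits per second."""
    if seconds <= 0:
        raise ValueError("sampling interval must be positive")
    return (bytes_now - bytes_then) * 8 / seconds

# Two samples of a device's byte counter taken 60 seconds apart:
rate = throughput_bps(1_000_000, 76_000_000, 60)
print(f"{rate / 1_000_000:.1f} Mbit/s")  # -> 10.0 Mbit/s
```

A device sustaining tens of megabits per second for hours is a prime suspect for the slowdown.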



I hope this post has proved informative as an intro to controlling your home network. Check back soon for further updates.


Cisco Open SOC

So a couple of days ago Cisco, it would seem, finally released its new open source security analytics framework, OpenSOC, to the developer community. OpenSOC sits conceptually at the intersection between Big Data and security analytics.

The current totalizer on the Breach Level Index website (breachlevelindex.com) sits at almost 2.4 billion data records lost this year so far, which works out at approximately 6 million per day. The level of data loss will not be dropping anytime soon, as attackers are only going to get better at getting their hands on this information. There is hope, however, as even the best hackers leave clues in their wake, although finding those clues in enormous amounts of analytical data such as logs and telemetry can be the biggest of challenges.

This is where OpenSOC seeks to make the crucial difference and bridge the gap. Incorporating a platform for anomaly detection and incident forensics, it integrates elements of the Hadoop ecosystem such as Kafka, Elasticsearch and Storm to deliver a scalable platform enabling full-packet capture indexing, storage, data enrichment, stream processing, batch processing, real-time search and telemetry aggregation. It seeks to provide security professionals with the facility to detect and react to complex threats on a single converged platform.

The OpenSOC framework provides three key elements for security analytics:


    1. Context

       An extremely high speed mechanism to capture and store security data. OpenSOC consumes data by delivering it to multiple high speed processors capable of heavy-lift contextual analytics, in tandem with appropriate storage enabling subsequent forensic investigations.

    2. Real-time Processing

       Application of enrichments such as threat intelligence, geolocation and DNS information to collected telemetry, providing for quick-reaction investigations.

    3. Centralized Perspective

       The interface presents alert summaries with threat intelligence and enrichment data specific to an alert on a single page. Advanced search capabilities and full packet-extraction tools are available for investigation without the need to pivot between multiple tools.
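As a toy sketch of the enrichment idea (the lookup tables, field names and function here are invented for illustration and are not the OpenSOC API), a telemetry event can be annotated in-stream before it is indexed:

```python
def enrich(event, geo_db, threat_intel):
    """Annotate a telemetry event with geolocation and threat-intel context,
    in the spirit of a real-time enrichment stage. Illustrative only."""
    src = event.get("src_ip")
    enriched = dict(event)                    # leave the raw event untouched
    enriched["geo"] = geo_db.get(src, "unknown")
    enriched["known_bad"] = src in threat_intel
    return enriched

geo_db = {"203.0.113.7": "NL"}    # hypothetical geolocation lookup table
threat_intel = {"203.0.113.7"}    # hypothetical threat-intelligence feed
event = {"src_ip": "203.0.113.7", "dst_port": 445}

print(enrich(event, geo_db, threat_intel))
```

In a real deployment the same transformation would run inside a Storm topology over events arriving via Kafka, with the enriched records landing in Elasticsearch.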



When sensitive data is compromised, the company's reputation, resources and intellectual property are put at risk. Quickly identifying and resolving the issue is critical, but traditional approaches to security incident investigation can be time-consuming. An analyst may need to take the following steps:

    1. Review reports from a Security Incident and Event Manager (SIEM) and run batch queries on other telemetry sources for additional context.

    2. Research external threat intelligence sources to uncover proactive warnings of potential attacks.

    3. Use a network forensics tool with full packet capture and historical records in order to determine context.



Apart from the need to access several tools and information sets, the act of searching and analysing the amount of data collected can take minutes to hours using traditional techniques. With OpenSOC, security professionals can use a single tool to navigate data with narrowed focus instead of wasting precious time trying to make sense of mountains of unstructured data.


Lightweight Directory Access Protocol (LDAP)

Traditional network engineers who arrive at the networking industry via the world of telecommunications can often find themselves unfamiliar with certain facets of the industry, such as network security and servers. A protocol which lies at the intersection between network security and server technology is LDAP, the Lightweight Directory Access Protocol.

So what is LDAP and what is it used for? Let's take a look at the protocol in some detail.


 

Within the OSI model, LDAP sits at layer 7 and is, as such, an application layer protocol. LDAP is also an "open" protocol, which means that its standards are public information and it is not associated with or owned by any individual commercial organisation. Its primary purpose is to act as a protocol for accessing and maintaining distributed directory information services over an IP network, having been specified to act seamlessly as part of a TCP/IP modelled network.

The most common usage for LDAP is to provide a mechanism for "single sign-on" across a distributed, multi-facility IT estate in order to minimise repeated authentication across multiple services. LDAP is based on a subset of the older and more heavily specified X.500 protocol, which was designed to be compatible with the more abstract OSI model.


 



When people talk about “LDAP”, they are really talking about the complex combination of business rules, software and data that allow you to log in and get access to secure resources.

A client starts an LDAP session by connecting to an LDAP server, called a Directory System Agent (DSA), by default on TCP and UDP port 389, or on port 636 for LDAPS. The Global Catalog is available by default on ports 3268, and 3269 for LDAPS. The client then sends an operation request to the server, and the server sends responses in return. With some exceptions, the client does not need to wait for a response before sending the next request, and the server may send the responses in any order. All information is transmitted using Basic Encoding Rules (BER); these encodings are commonly called type-length-value, or TLV, encodings. The LDAP server hosts the directory-server database, so the LDAP protocol can be thought of loosely as a network-enabled database query language.
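To make the TLV idea concrete, here is a minimal sketch of a BER-style short-form encoder (illustrative only; real BER also has long-form lengths, constructed types and much more):

```python
def tlv_encode(tag, value):
    """Encode one type-length-value triplet using BER short-form lengths.
    A sketch of the idea, not a complete BER implementation."""
    if len(value) > 127:
        raise ValueError("only short-form lengths in this sketch")
    return bytes([tag, len(value)]) + value

# 0x04 is the BER tag for an OCTET STRING:
encoded = tlv_encode(0x04, b"cn=admin")
print(encoded.hex())  # -> 0408636e3d61646d696e
```

Reading it back is the mirror image: one byte of type, one byte of length, then that many bytes of value.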


 

The client may request the following operations:

StartTLS — use the LDAPv3 Transport Layer Security (TLS) extension for a secure connection
Bind — authenticate and specify LDAP protocol version
Search — search for and/or retrieve directory entries
Compare — test if a named entry contains a given attribute value
Add a new entry
Delete an entry
Modify an entry
Modify Distinguished Name (DN) — move or rename an entry
Abandon — abort a previous request
Extended Operation — generic operation used to define other operations
Unbind — close the connection (not the inverse of Bind)

 

 

As was alluded to above, the directory-server database is indeed a database and, as such, is structured in accordance with the rules of its own schema. The contents of the entries in an LDAP domain are governed by a directory schema: a set of definitions and constraints concerning the structure of the directory information tree (DIT).

The schema of a Directory Server defines a set of rules that govern the kinds of information that the server can hold. It has a number of elements, including:
Attribute Syntaxes—Provide information about the kind of information that can be stored in an attribute.
Matching Rules—Provide information about how to make comparisons against attribute values.
Matching Rule Uses—Indicate which attribute types may be used in conjunction with a particular matching rule.
Attribute Types—Define an object identifier (OID) and a set of names that may be used to refer to a given attribute, and associates that attribute with a syntax and set of matching rules.
Object Classes—Define named collections of attributes and classify them into sets of required and optional attributes.
Name Forms—Define rules for the set of attributes that should be included in the RDN for an entry.
Content Rules—Define additional constraints about the object classes and attributes that may be used in conjunction with an entry.
Structure Rules—Define rules that govern the kinds of subordinate entries that a given entry may have.

Attributes are the elements responsible for storing information in a directory, and the schema defines the rules for which attributes may be used in an entry, the kinds of values that those attributes may have, and how clients may interact with those values.


 

Clients may learn about the schema elements that the server supports by retrieving an appropriate subschema subentry.


 

The schema defines object classes. Each entry must have an objectClass attribute, containing named classes defined in the schema. The schema definition of the classes of an entry defines what kind of object the entry may represent - e.g. a person, organization or domain. The object class definitions also define the list of attributes that must contain values and the list of attributes which may contain values.


 

For example, an entry representing a person might belong to the classes "top" and "person". Membership in the "person" class would require the entry to contain the "sn" and "cn" attributes, and allow the entry also to contain "userPassword", "telephoneNumber", and other attributes. Since entries may have multiple ObjectClasses values, each entry has a complex of optional and mandatory attribute sets formed from the union of the object classes it represents. ObjectClasses can be inherited, and a single entry can have multiple ObjectClasses values that define the available and required attributes of the entry itself. A parallel to the schema of an objectClass is a class definition and an instance in Object-oriented programming, representing LDAP objectClass and LDAP entry, respectively.
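The required/optional split described above can be sketched in a few lines of Python (the schema table here is a hypothetical, heavily simplified stand-in for a real LDAP schema):

```python
# Hypothetical, simplified schema: each object class lists the attributes
# an entry MUST carry and those it MAY carry.
SCHEMA = {
    "top":    {"must": {"objectClass"}, "may": set()},
    "person": {"must": {"sn", "cn"},
               "may": {"userPassword", "telephoneNumber"}},
}

def validate_entry(entry):
    """True if the entry has every required attribute of its object classes
    and no attribute outside the union of required and optional ones."""
    must, may = set(), set()
    for oc in entry["objectClass"]:
        must |= SCHEMA[oc]["must"]
        may |= SCHEMA[oc]["may"]
    attrs = set(entry)
    return must <= attrs and attrs <= must | may

entry = {"objectClass": ["top", "person"],
         "sn": "Smith", "cn": "John Smith",
         "telephoneNumber": "555-0100"}
print(validate_entry(entry))  # -> True
```

Dropping the "sn" attribute from the example entry would make it invalid, exactly as membership of the "person" class demands.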


 

Directory servers may publish the directory schema controlling an entry at a base DN given by the entry's subschemaSubentry operational attribute. (An operational attribute describes operation of the directory rather than user information and is only returned from a search when it is explicitly requested.)


 

Server administrators can add their own schema entries beyond the provided schema elements. A schema for representing individual people within organizations is termed a white-pages schema.


 

We will go on in subsequent posts to examine some of the concepts described here in more detail.

Wi-Fi security luddite? The ICO is coming for you!

The Information Commissioner's Office today published new guidance for home Wi-Fi security after a YouGov report found that 40% of home users did not understand how to manage the security settings on their networks.

The survey also found that in spite of most ISPs now setting up and installing security on Wi-Fi equipment, 16% of the people surveyed were unsure whether or not they were using a secured network, or were aware they weren't, but didn't give a toss either way.

The new guidance includes information on managing encryption settings and how to think of a secure password. Top tip? Don't use pa55w0rd.

Giving strangers access to your network could reduce connection speed, cause you to exceed data caps, or allow hordes of criminals to use your network for nefarious purposes, said the ICO.

Welcoming the move, D-Link's Chris Davies pointed out that there was no excuse for being caught out.

"There is no doubt that in the past setting up security on wireless networks could be tricky," said Chris. "But this is no longer the case with most wireless products.

"Security can be set up wiin a couple of minutes with no prior technical knowledge required. We've also been working with ISPs to help them ship products to consumers with security pre-configured."

Let's just hope the ICO doesn't start fining home users for data breaches. Or maybe that would be the kick in the butt some of them need?

How to recognise security vulnerabilities in your IT systems

As IT systems continue to extend across multiple environments, IT security threats and vulnerabilities have likewise continued to evolve.

Whether from the growing insider threat of rogue and unauthorised internal sources, or from the ever increasing number of external attacks, organisations are more susceptible than ever to crippling attacks. It's almost become simply a matter of "when it will happen" rather than "if it will happen."

For IT resellers, security issues have always persisted as critical to all communications for an organisation's IT department.

However, with the increase in the levels of access to a company's network compounded by these maturing threats, it is no longer feasible to merely recognise the existence of more simplistic, perimeter threats.

Resellers must be able to provide customers with a comprehensive risk assessment of the entirety of an organisation's IT assets and their vulnerabilities, inclusive of both software and hardware.

This risk assessment must incorporate an understanding of external threats and internal vulnerabilities and how the two continue to merge to create increasingly susceptible IT environments.

At the most basic level, organisations and resellers alike must understand the different types of threats. Malware, a generic term for malicious software such as Trojan horses, worms and viruses, is the most common form of attack originated by an external hacker. Malware attacks have persisted for years - from the infamous Morris worm to common spyware attacks - and they remain the easiest and most damaging tactic deployed by malicious hackers.

With enterprises extending to the cloud, and more organisations adopting SaaS-based applications, social media and other Web 2.0 tools, damaging malware attacks and viruses can now originate through simple spam messages and emails.

Internally, organisations are typically susceptible to threats from either authorised rogue users who abuse privileged accounts and identities to access sensitive information, or unauthorised users who use their knowledge of administrative credentials to subvert security systems. It is this type of vulnerability - unauthorised internal access - that has continued to emerge as the most volatile and disruptive.

To truly understand the risks involved with these "insider threats", organisations and resellers need to understand the root of the vulnerabilities.

Most commonly, the risks lie with the use of embedded credentials, most notably hard-coded passwords: a practice employed by software developers to provide access to administrators during the development process. The practice occurs frequently because application developers tend to be more focused on the development and release cycle of the application than on security concerns. While it may appear harmless at first glance, it is extremely risky, as it can potentially provide unauthorised users with powerful, complete access to IT systems.
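To illustrate the antipattern (the variable and environment names here are invented for the example), compare a password baked into source code with one resolved at run time:

```python
import os

# The antipattern: a credential hard-coded at development time. Anyone who
# can read the source (or the shipped binary) now holds an admin password,
# and it cannot be rotated without releasing new code.
HARDCODED_PASSWORD = "s3cret-admin"  # hypothetical; never do this

def get_admin_password():
    """Resolve the admin credential from the environment instead of the
    source, so it can be rotated without touching the application."""
    password = os.environ.get("ADMIN_PASSWORD")
    if password is None:
        raise RuntimeError("ADMIN_PASSWORD is not set")
    return password
```

Externalising the credential is only a first step, but it at least makes rotation possible, which a hard-coded password never is.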

To compound the matter, by hardcoding passwords to cover embedded credentials, vendors create a problem that cannot easily be fixed, even by tools such as Privileged Identity Management systems. Once embedded into an application, the passwords cannot be removed without damaging the system. At the end of the day, the passwords hand malicious outsiders a bullseye target: a key vulnerability to leverage in gaining powerful access and control of a target device, and potentially throughout the entire organisation.

One of the most well-known examples is the Stuxnet virus. We've all been blown away by the design of Stuxnet, and were surprised by the pathway the virus took in targeting SCADA systems. Reflection shows that the virus used the hard-coded password vulnerability to target these systems, which should serve as a lesson for all businesses.

The existence of vulnerabilities embedded within these types of systems is not necessarily new, but the emergence of new threats continues to shed light on the ease with which they can be leveraged for an attack. While malicious outsiders and insiders have often focused on the administrative credentials of typical systems like servers and databases, in reality IT organisations need to identify every asset that has a microprocessor, memory or an application/process. From copiers to scanners, these devices all have similar embedded credentials that represent significant organisational vulnerabilities.

While steps can be taken to proactively manage embedded credentials without hardcoding them in the first place - Privileged Identity Management tools can help - the onus is on the organisation, and the reseller, to ensure that a holistic view of all vulnerabilities and risks has been taken.