PCI DSS 3.0 – What’s new?

As most information security professionals are aware, the Payment Card Industry (PCI) Security Standards Council released version 3.0 of its Data Security Standard (DSS) in late 2013. As in the past, the PCI council has also released a summary-of-changes document highlighting the differences between versions 2.0 and 3.0 of the standard.

THIS ARTICLE highlights the 5 most important changes for merchants introduced by version 3.0 of the PCI Data Security Standard (DSS):

PCI DSS 5 changes

 

The majority of the changes in PCI DSS version 3.0 are clarifications and supplemental guidance, not (for the most part) new requirements. The underlying reason for the changes is the new ways in which technology is used, both by merchants and by attackers. In general, PCI DSS 3.0 provides better guidance to Qualified Security Assessors (QSAs) about what to assess and what evidence is needed to confirm that a control is in place.

This article groups the updates to the standard into 5 categories that are likely to present the biggest roadblocks for the largest segment of the merchant population.

 

1. Penetration testing

Perhaps the most visible change to the existing requirements concerns the updated penetration testing requirements (11.3), including the requirement (11.3.4) to verify the methods used to segment the cardholder data environment (CDE) from other areas. One portion of the update that’s likely to be particularly challenging for merchants is the requirement that penetration testing activities (internal and external) must now follow an “industry-accepted penetration testing methodology,” such as the specifically referenced NIST SP 800-115, Technical Guide to Information Security Testing and Assessment. This is, however, good news for specialized penetration testing service providers that can demonstrate adherence to industry-accepted methodologies: they will likely see a boost in business, because merchants must now be careful to select their service providers only from among them.

 

2. Inventorying system components

Another area of potentially huge practical impact relates to the new requirement (2.4) to “maintain an inventory of system components that are in scope for PCI DSS.” The testing procedure for this specifically requires assessors to verify that a list of hardware and software components is maintained and includes a description of function/use for each. As we have all experienced, keeping inventories current isn’t easy, mostly because environments change often and, worse, inventories frequently require manual effort to keep them reflective of the environment as it actually exists. This complexity is compounded when virtualization is thrown into the mix (because a system component includes virtual images too) or when the environment sprawls across multiple geographic locations, as most distributed retail operations are likely to do. Further complexity arises when proprietary, vendor-supplied systems are maintained by outside personnel (for example, application vendors or system integrators).
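To make the 2.4 testing procedure concrete, here is a minimal sketch of an inventory check. The record fields and component names are illustrative assumptions, not anything the standard prescribes; the point is simply that an assessor will flag in-scope components with no documented function/use.

```python
# Hypothetical inventory records for in-scope system components.
# Field names and IDs are made up for illustration.
inventory = [
    {"id": "srv-01", "type": "hardware", "function": "web server (CDE)"},
    {"id": "vm-12",  "type": "virtual",  "function": "tokenization service"},
    {"id": "pos-7",  "type": "hardware", "function": ""},  # missing description
]

def missing_descriptions(components):
    """Return IDs of components lacking a function/use description,
    the kind of gap the 2.4 testing procedure would surface."""
    return [c["id"] for c in components if not c["function"].strip()]

print(missing_descriptions(inventory))  # -> ['pos-7']
```

A check like this only helps if the inventory itself is kept current, which is exactly the hard part the paragraph above describes.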

 

3. Vendor Relationships

Requirements 12.8.5 and 12.9 now call for explicit documentation of which PCI DSS requirements are managed by vendors and which are managed by the organization itself. This documentation requirement means it is no longer enough to maintain a list of vendors and to track their compliance status when their services intersect your cardholder data environment (CDE), both of which were already required before DSS 3.0; you must also maintain a responsibility assignment matrix mapping the PCI DSS requirements to each applicable vendor, which the vendor must sign off on.
In practice, merchants must now know exactly what the vendor or service provider does (to determine what its scope is), where responsibility should lie for controls, and how to create a document that describes those things. Then comes the fun part: getting the service providers in question to agree and to enter into a formal, written agreement about it. As anybody who’s been involved in vendor negotiations in the past can tell you, negotiating these points (particularly after a contract with a service provider is already in place) will be time-consuming and may be (depending on the service provider) contentious.
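A responsibility assignment matrix of this kind can be as simple as a table of requirement-to-owner mappings per vendor. The sketch below uses a hypothetical vendor name and a tiny subset of requirements; real matrices would cover all applicable requirements and be signed off by the vendor.

```python
# Illustrative responsibility matrix: who owns each PCI DSS requirement
# for a given service provider. The vendor name and assignments are
# hypothetical; the requirement IDs are from the standard.
matrix = {
    "payments-gateway-inc": {
        "11.3 (penetration testing)": "vendor",
        "2.4 (inventory)": "merchant",
        "12.8.5 (responsibility tracking)": "shared",
    }
}

def unassigned(matrix, requirements):
    """List requirements with no documented owner in any vendor matrix."""
    covered = {req for reqs in matrix.values() for req in reqs}
    return [r for r in requirements if r not in covered]

print(unassigned(matrix, ["11.3 (penetration testing)",
                          "9.9 (PoS inspection)"]))  # -> ['9.9 (PoS inspection)']
```

Gaps reported by a check like this are exactly the points that have to be negotiated into the written agreement with the provider.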

Also to be noted: PCI DSS 3.0 mandates that service providers with remote access to cardholder data environments (CDEs) use a unique authentication credential for each customer environment, a requirement that will undoubtedly enhance security. It is not unusual for QSAs to find service providers using a common authentication credential to manage multiple client environments. From now on, service providers are expected to implement new per-customer credential and password management processes, and merchants should obtain confirmation from their service providers that this control is being met.

 

4. Anti-Malware

Requirement 5.1.2 now requires merchants to “identify and evaluate evolving malware threats” for “systems considered to be not commonly affected by malicious software.” That means that if you use a system that isn’t usually affected by malware (think mainframes or Unix servers), you’ll need a process to make sure this continues to be the case and that, should malware emerge for those platforms, you’ll know about it. Depending on the organization, these requirements can have a real impact. Under PCI DSS 2.0, the standard specified only that antivirus software be in place, operational, kept current and able to generate logs; now the anti-malware system must also be capable of preventing users from disabling it (which will probably require specific configuration) and be configured to make use of that capability.

 

5. Physical access and point of sale

Requirement 9.3 now requires merchants to control physical access for on-site personnel: access must be authorized, based on individual job function, and revoked immediately upon termination. In addition, requirement 9.9 now requires merchants to “protect devices that capture payment card data … from tampering and substitution”. How many merchants right now have an inventory of their PoS devices? While it’s certainly a good practice, the reality is that surprisingly few do. And consider where requirement 9.9 is most likely to apply from a merchant standpoint: retail locations, restaurants, doctors’ offices, food trucks, taxi cabs and other unique retail environments. Are those retailers accustomed to “periodically inspecting” point-of-sale (PoS) devices, for example by checking serial numbers to ensure that devices haven’t been swapped? Not likely.

To meet these new requirements, many organizations will need to develop and implement new PoS security processes, such as maintaining up-to-date inventories, performing periodic PoS inspections and providing employee training about PoS security. QSAs will expect all such processes to be thoroughly documented and regularly performed.
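A periodic inspection process under 9.9 boils down to comparing what is physically on the counter against the inventory of record. The sketch below uses made-up till names and serial numbers to show the two findings an inspection should raise: a missing device and a serial mismatch suggesting substitution.

```python
# Inventory of record for one retail location (hypothetical serials).
inventory = {"till-1": "SN-4471", "till-2": "SN-4472"}

def inspect(observed):
    """observed: mapping of till -> serial number read during the
    walk-through. Returns findings for requirement 9.9 follow-up."""
    findings = []
    for till, expected in inventory.items():
        actual = observed.get(till)
        if actual is None:
            findings.append(f"{till}: device missing")
        elif actual != expected:
            findings.append(f"{till}: serial mismatch (possible substitution)")
    return findings

print(inspect({"till-1": "SN-4471", "till-2": "SN-9999"}))
```

Documenting each inspection run, as the paragraph above notes, is what QSAs will expect to see.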

 

If you want to read further on PCI DSS 3.0, I suggest a look at the following articles:

PCI 3.0: New requirements cover pen testing, service providers

PCI QSA analysis: PCI DSS 3.0 to bring new PCI challenges, benefits

PCI DSS 3.0 preview highlights passwords, providers, payment data flow

 

 

DISCUSSION QUESTION:

Which requirement in PCI DSS 3.0 do you think will be most difficult to meet, and why?

Identity and Access Management (IAM) in the cloud: Challenges galore

The topic of today’s post is Identity and Access Management (IAM) in the Cloud. 

For those who are not familiar with the term, an Identity and Access Management (IAM) system is a framework of business processes that facilitates the management of electronic identities, along with the technology needed to support it. IAM technology can be used to initiate, capture, record and manage user identities and their related access permissions in an automated fashion. This ensures that access privileges are granted according to one interpretation of policy and that all individuals and services are properly authenticated, authorized and audited.
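Those three guarantees, authentication, authorization and auditing, can be sketched in a few lines. The users, passwords and permissions below are purely illustrative; a real IAM system would back each step with directories, policy engines and tamper-resistant logs.

```python
# Toy illustration of the authenticate/authorize/audit cycle that an
# IAM system automates. All names and credentials are made up.
users = {"alice": {"password": "s3cret", "roles": {"hr"}}}
role_permissions = {"hr": {"read:payroll"}}
audit_log = []

def access(user, password, permission):
    ok = (user in users
          and users[user]["password"] == password            # authenticate
          and any(permission in role_permissions.get(r, set())
                  for r in users[user]["roles"]))            # authorize
    audit_log.append((user, permission, ok))                 # audit
    return ok

print(access("alice", "s3cret", "read:payroll"))   # -> True
print(access("alice", "s3cret", "write:payroll"))  # -> False
```

Every decision, granted or denied, lands in the audit trail, which is what makes the "properly audited" part of the definition checkable after the fact.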

In the not-so-distant past, identity and access management was just another entry on the long list of functions handled internally with on-premises technology. But the world of identity management no longer exists only behind the firewall, and the adoption of cloud applications and services has led some enterprises to consider IAM as a service, or cloud-based IAM, as an answer to their security challenges. In other words, some companies are opting to purchase an identity management system as a service from an external provider. Contenders in this market include names such as Microsoft, IBM, Oracle, Symplified, Identity Automation, Verizon and Mycroft.

Organizations that have already taken the leap by putting sensitive legacy applications in the cloud are more open to the idea of having their IAM infrastructure in the cloud from a security perspective. Those firms definitely want the benefits of a cloud-based service from a run-time operational expense perspective and also like the idea of having pre-built integration between the cloud-based infrastructure and their SaaS environment.

There are a number of factors companies should consider before moving to a cloud-based identity management service.

Step one, numerous sources say, is for organizations to understand the requirements of their environment. This seems an obvious step, but you have to make sure you start here, because much of the decision-making process is driven by these considerations.

If you have a homegrown application or something customized, like an SAP environment, you may want an on-premises solution or a private service solution that can be customized and configured specifically for your business and its applications.

If you’re supporting cloud applications, standard services or, particularly, something that involves external users, there can be real benefit in cloud IAM because you’re managing the identities of a community. You leverage the expertise of the cloud IAM vendor and take advantage of economies of scale.

Other essential considerations include understanding service-level agreements, how many identities are being managed and whether the cloud service will be used as an adjunct to in-house identity management, which would require synchronizing sources.

Step two covers security considerations: the security of placing directory data in the cloud is a very big factor for some companies, and it should be weighed by any organization. It must be noted that the cloud provider may be able to deliver a much higher level of security than the in-house operations team managing an Active Directory server, an LDAP server or other alternatives, so the decision to move IAM to the cloud may be consistent with an objective of improving security, not just obtaining better management, agility, lower costs and operational improvements at the expense of security.

If service providers offer better separation of duties and better monitoring and reporting of admin activities, that may even attract organizations seeking to improve their security posture. Some enterprises will be willing to pay a premium for more security, others will implement private directories in the cloud, and others still will move off Active Directory (AD) entirely.

If the decision to go ahead with a cloud IAM solution is made, then step three digs into role definition for IAM: at this stage the organization has to identify its user roles and decide which role needs access to what across the organization. The company will also have to build an exception-handling policy for this. In addition, the CISO will need to obtain approval from the various process owners for the roles identified and aligned to resource access.
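The output of this step is essentially a role-to-resource map plus an exception path. The sketch below shows both, with entirely hypothetical roles and resources: normal access flows from the role definition, and anything outside it requires a recorded, owner-approved exception.

```python
# Hypothetical role definitions and an exception register.
roles = {"finance-analyst": {"erp-reports"}, "developer": {"git", "ci"}}
exceptions = {("developer", "erp-reports"): "approved-by-finance-owner"}

def allowed(role, resource):
    """Grant access if the role definition covers the resource, or if a
    process-owner-approved exception exists for this (role, resource)."""
    if resource in roles.get(role, set()):
        return True
    return exceptions.get((role, resource)) is not None

print(allowed("developer", "erp-reports"))  # -> True (approved exception)
print(allowed("developer", "payroll"))      # -> False
```

Keeping exceptions in a separate register, rather than quietly widening the role, is what makes the process-owner approvals mentioned above auditable.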

Finally, in step four, the organization takes into account access control, pilot requirements and application migration priorities for IAM integration. A conceptual IAM architecture will be designed on the basis of functional requirements, identity and access management use cases will be produced and, finally, an IAM governance framework will be defined once the outsourcer is chosen.

 

How will an IAM cloud solution work?

Cloud management platforms are usually a proxy between the users and the cloud management plane. The proxy has access to the entire cloud infrastructure, and users run through the proxy instead of making direct API (Application Programming Interface) calls. You can create all sorts of new workflows and policies in a cloud management platform, such as requiring dual administrator approval before terminating any instance in a particular group. With the platform, you can also patch massive numbers of instances using scripts or automatically check any new instance to make sure it meets certain configuration guidelines.
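The dual-approval policy mentioned above is a good example of the workflows a proxy can enforce. This is a toy sketch of the idea, with made-up group and admin names: the proxy holds the termination request until two distinct admins have approved it, and only then forwards the call to the cloud API.

```python
# Proxy-enforced policy: terminating an instance in a protected group
# requires two distinct admin approvals before the real API is called.
protected_groups = {"cardholder-data"}
pending = {}  # instance_id -> set of admins who have approved

def request_terminate(instance_id, group, admin):
    if group not in protected_groups:
        return "terminated"                 # pass straight through
    approvals = pending.setdefault(instance_id, set())
    approvals.add(admin)                    # a set, so re-approving is idempotent
    if len(approvals) >= 2:
        del pending[instance_id]
        return "terminated"                 # second approver triggers the call
    return "awaiting second approval"

print(request_terminate("i-42", "cardholder-data", "alice"))
print(request_terminate("i-42", "cardholder-data", "bob"))
```

Because users go through the proxy instead of calling the API directly, the policy cannot be bypassed by an individual administrator.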

On the security side, you can manage what is deployed where, what admins and users are able to do, track what they actually did (through granular logs), and you can even insert security controls. For example, you can configure local security agents in new or existing instances automatically or even encrypt storage volumes and manage the keys through the tool (or service). The power of cloud is automation. Thus, the key to cloud security is also automation. These cloud management platforms may be focused on operations, but they also allow you to implement a wide range of security automation that is actually harder to do in a traditional infrastructure.
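The automated configuration check described above can be reduced to comparing a new instance against a required baseline. The baseline keys below are illustrative assumptions; a real platform would pull them from its policy store and remediate or quarantine drifting instances.

```python
# Hypothetical security baseline that every new instance must meet.
baseline = {"agent_installed": True,
            "volume_encrypted": True,
            "ssh_root_login": False}

def config_drift(instance_config):
    """Return the settings where the instance deviates from baseline."""
    return {k: instance_config.get(k)
            for k, required in baseline.items()
            if instance_config.get(k) != required}

new_instance = {"agent_installed": True,
                "volume_encrypted": False,
                "ssh_root_login": False}
print(config_drift(new_instance))  # -> {'volume_encrypted': False}
```

Running this on every launch, rather than on a periodic audit, is the kind of automation that is genuinely harder to achieve in traditional infrastructure.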

The following infographic summarizes some key-points of the new Identity and Access Management model (click to enlarge):

Symplified_InfographicDec_20113

 

 

Learn more by reading the following articles:

Identity and access management (IAM) in the cloud: Challenges galore

Identity and access management (IAM) program implementation guidelines

Optimal identity and access management tips for your business

Cloud management platforms key for cloud security

Cloud, Cost and Complexity: Challenging the Traditional Identity Management Model

 

 

Highlights from 2014 Cyber-threats Defense Report and Verizon Data Breach Investigation Report

The 2014 Cyberthreat Defense Report from the Cyberedge group and the 2014 annual Data Breach Investigations Report from Verizon are out.

These reports are very useful for identifying trends in the information security industry and serve as a guide to where information security professionals’ attention should go.

In war, knowing your enemy is imperative to establishing an effective defensive strategy. The same holds true for effective IT security, and several excellent industry reports help inform IT security professionals on this front.

The Verizon annual Data Breach Investigations Report sheds considerable light on the evolving nature of cyberthreats, the actors behind them, and the techniques being used to perpetrate successful attacks.

The main takeaway from the 2014 report is that while 2013 may be tagged as the “year of the retailer breach,” a more comprehensive assessment of the InfoSec risk environment shows it was a year of transition from geopolitical attacks to large-scale attacks on payment card systems.

One consistent finding of the report across the years is that nine out of ten of all breaches can be described by nine basic patterns of attack:

  1. POINT-OF-SALE (POS) INTRUSIONS – Remote attacks against the environments where retail transactions are conducted, specifically where card-present purchases are made. Note: crimes involving tampering with or swapping out devices are covered in the Skimming pattern.
  2. WEB APP ATTACKS – Any incident in which a web application was the vector of attack. This includes exploits of code-level vulnerabilities in the application as well as thwarting its authentication mechanisms (for example, with stolen credentials).
  3. INSIDER AND PRIVILEGE MISUSE – All incidents tagged with the action category of Misuse — any unapproved or malicious use of organizational resources — fall within this pattern. This is mainly insider misuse, but outsiders (due to collusion) and partners (because they are granted privileges) show up as well.
  4. PHYSICAL THEFT AND LOSS – Any incident where an information asset went missing, whether through misplacement or malice.
  5. MISCELLANEOUS ERRORS – Incidents where unintentional actions directly compromised a security attribute of an information asset. This does not include lost devices, which is grouped with theft instead.
  6. CRIMEWARE – Any malware incident that did not fit other patterns like espionage or point-of-sale attacks. We labeled this pattern “crimeware” because the moniker accurately describes a common theme among such incidents. In reality, the pattern covers a broad swath of incidents involving malware of varied types and purposes.
  7. PAYMENT CARD SKIMMERS – All incidents in which a skimming device was physically implanted (tampering) on an asset that reads magnetic stripe data from a payment card (e.g., ATMs, gas pumps, POS terminals, etc.).
  8. DENIAL OF SERVICE ATTACKS – Any attack intended to compromise the availability of networks and systems. Includes both network- and application-layer attacks.
  9. CYBER-ESPIONAGE – Incidents in this pattern include unauthorized network or system access linked to state-affiliated actors and/or exhibiting the motive of espionage.

For each of these categories, the Verizon report provides data on which industries are primarily concerned and the frequency of the attacks, and it also offers key findings and recommendations for controls to put in place.

As you can see below (click to enlarge), web application attacks were the most prevalent last year, but when the trend over the last three years is considered, they are still outnumbered by point-of-sale attacks:

Verizon charts

Two extremely valuable charts are included at the end of the report to make it more “actionable” for IT security professionals:

The first contains recommendations for critical security controls mapped to incident patterns (click to enlarge):

Verizon controls

 

The second prioritizes security controls by industry, based on the frequency of incident patterns within that industry (click to enlarge):

Verizon per industry


 

The Cyberthreat Defense Report informs the IT security community in another, complementary way. Based on a rigorous survey of IT security decision makers and practitioners across North America and Europe, the Cyberthreat Defense Report examines the current and planned deployment of technological countermeasures against the backdrop of numerous perceptions, such as:

  • The adequacy of existing cybersecurity investments, overall and within specific domains of IT
  • The likelihood of being compromised by a successful cyberattack within the next 12 months
  • The types of cyberthreats and cyberthreat sources that pose the greatest risk to a given organization
  • The effectiveness of both traditional and next-generation/advanced technologies for thwarting cyberthreats
  • The organizational factors that represent the most significant barriers to establishing effective cyberthreat defenses
  • The most valuable solution capabilities and packaging options

By revealing these details, the report aims to provide IT security decision makers with a better understanding of how their perceptions, concerns, priorities and, most importantly, current defensive postures stack up against those of other IT security professionals and organizations.

The main findings are summarized here below (click to enlarge):

Cyberedge2014 highlights

 

Worth highlighting, in my opinion, are the following:

  • Malware and phishing give IT security professionals the most headaches.
  • Security professionals are more concerned about malicious insiders than cybercriminals.
  • Low security awareness among employees is the greatest inhibitor to adequately defending against cyber-threats (which brings back the discussion of this post)
  • 89% of IT security budgets are rising or holding steady.
  • One in four security professionals doubts whether their organization has invested adequately in cyber-threat defenses.
  • Mobile devices (smartphones and tablets) are perceived as IT security’s “weakest link,” followed by laptops and social media applications.
  • Implementation of bring-your-own-device (BYOD) policies will more than double within the next two years—from 31% in 2014 to 77% in 2016.
  • Only 7% of IT security professionals prefer a software-as-a-service (SaaS) delivery model for their cyber-threat defenses.
  • Over 60% of respondents were affected by a successful cyberattack in 2013.

Cloud Computing Security Threats

Cloud computing is a big buzzword these days.

If you want a fairly technical overview of how cloud computing works, I suggest reading THIS ARTICLE; that, however, is not the objective of this post.

Organizations are turning to public cloud environments mostly because competitive conditions in most industries require faster delivery and more innovative solutions than their existing IT environments and processes can support. And besides agility and efficiency, “considerable cost advantages” are always among the main reasons to turn to outsourced, remote services.

Having said that, a realistic feasibility analysis for any cloud computing solution should include a fair share of disadvantages. Some of the most relevant have to do with the security of the information kept by the provider. This article from InformationWeek provides an overview of the nine most relevant cloud security threats:

Cloud computing sec threats

 

The article is based on the findings of a report produced by the CSA – Cloud Security Alliance concerning the top Cloud Computing Security Threats.

The whole report is called The Notorious Nine: Cloud Computing Top Threats in 2013 and can be downloaded here: The_Notorious_Nine_Cloud_Computing_Top_Threats_in_2013

 

CSA top security threats report 2013

I thoroughly recommend that all IT security professionals read the whole CSA report. However, here is a brief summary of those nine main information security threats:

1. Data Breaches
It’s every CIO’s worst nightmare: the organization’s sensitive internal data falls into the hands of its competitors.
Clouds represent concentrations of corporate applications and data, and if an intruder penetrated far enough, who knows how many sensitive pieces of information would be exposed. If a multitenant cloud service database is not properly designed, a flaw in one client’s application could allow an attacker access not only to that client’s data, but to every other client’s data as well.
Unfortunately, while data loss and data leakage are both serious threats to cloud computing, the measures you put in place to mitigate one can exacerbate the other. Encryption protects data at rest, but lose the encryption key and you’ve lost the data. The cloud provider, meanwhile, routinely makes copies of data to prevent its loss in the event of an unexpected server failure, and the more copies there are, the greater your exposure to breaches.

2. Data Loss
Another CIO nightmare. Data loss may occur when a disk drive dies without its owner having created a backup, or when the owner of encrypted data loses the key that unlocks it. Data loss can also be caused intentionally by a malicious attack. There are many techniques to prevent data loss and, guess what, losses occur anyway. The cloud provider’s personnel can make mistakes too and won’t necessarily be more reliable than your own IT department.

3. Account Or Service Traffic Hijacking
Account hijacking (through phishing, exploitation of software vulnerabilities such as buffer overflows, or loss of passwords and credentials) sounds too elementary to be a concern in the cloud, but the CSA – Cloud Security Alliance – says it is a problem. If your account in the cloud is hijacked, it can be used as a base from which an attacker can leverage the power of your reputation at your expense. The CSA notes that Amazon.com’s wireless retail site experienced a cross-site scripting attack in April 2010 that allowed the attackers to hijack customer credentials as they came to the site. The alliance offers tips on how to practice defense in depth against such hijackings, but the must-do points are to prohibit the sharing of account credentials between users, including trusted business partners, and to implement strong two-factor authentication techniques “where possible.”
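The "strong two-factor authentication" the CSA recommends is commonly implemented as time-based one-time passwords (TOTP, RFC 6238): a code derived from a shared secret and the current 30-second window, so a stolen static password alone is not enough. Here is a compact standard-library sketch of the SHA-1 variant; a production deployment would also handle clock skew and rate limiting.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (HMAC-SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII key "12345678901234567890", T = 59 s
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8))  # -> 94287082
```

Because the code changes every 30 seconds and is bound to a per-user secret, it also discourages the credential sharing the CSA warns against.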

4. Insecure APIs
The cloud era has brought about the contradiction of trying to make services available to millions while limiting the damage all these largely anonymous users might do to the service. The answer has been the public-facing application programming interface, or API, which defines how a third party connects an application to the service and provides verification that the third party producing the application is who he says he is. But security experts warn that there is no perfectly secure public API: all of them are subject to breaches. Layers are added to APIs to reach value-added services, and increasing complexity adds to the possibility that some exposure exists. Reliance on a weak set of interfaces and APIs exposes organizations to a variety of security issues related to confidentiality, integrity, availability and accountability.
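The verification step described above is often done by signing each request with a shared secret, so the service can check both the caller's identity and the request's integrity. The sketch below shows the general HMAC pattern; the canonical-string format and header handling are illustrative assumptions, not any particular vendor's scheme.

```python
import hashlib, hmac

def sign(secret, method, path, body):
    """Sign a request by HMAC-ing a canonical string (illustrative format)."""
    canonical = "\n".join([method, path, hashlib.sha256(body).hexdigest()])
    return hmac.new(secret, canonical.encode(), hashlib.sha256).hexdigest()

def verify(secret, method, path, body, signature):
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(sign(secret, method, path, body), signature)

sig = sign(b"shared-secret", "POST", "/v1/orders", b'{"id": 1}')
print(verify(b"shared-secret", "POST", "/v1/orders", b'{"id": 1}', sig))  # -> True
```

Any tampering with the method, path or body invalidates the signature, which is precisely what makes a leaked or weakly managed secret so damaging: it undoes all of these guarantees at once.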

5. Denial Of Service
For cloud customers, “experiencing a denial-of-service attack is like being caught in rush-hour traffic gridlock: there’s no way to get to your destination, and nothing you can do about it except sit and wait.” When a denial-of-service attack hits a customer’s service in the cloud, it may impair the service without shutting it down, in which case the customer will be billed by the cloud service for all the resources consumed during the attack.

6. Malicious Insiders
With the Edward Snowden case and NSA revelations in the headlines, malicious insiders might seem to be a common threat. If one exists inside a large cloud organization, the hazards are magnified. One tactic cloud customers should use to protect themselves is to keep their encryption keys on their own premises, not in the cloud. Systems that depend solely on the cloud service provider for security are at great risk from a malicious insider attack.

7. Abuse Of Cloud Services
Responsibility for the use of cloud services rests with service providers, but how will they detect inappropriate uses? Do they have clear definitions of what constitutes abuse? How will it be prevented in the future if it occurs once? Cloud computing brings large-scale, elastic services to enterprise users and hackers alike. It might take an attacker years to crack an encryption key using his own limited hardware, but using an array of cloud servers he might be able to crack it in minutes. Or hackers might use cloud servers to serve malware, launch DDoS attacks or distribute pirated software.

8. Insufficient Due Diligence
Many enterprises jump into the cloud without understanding the full scope of the undertaking. Without an understanding of the service providers’ environment and protections, customers don’t know what to expect in the way of incident response, encryption use, and security monitoring. Not knowing these factors means that organizations are taking on unknown levels of risk in ways they may not even comprehend, but that are a far departure from their current risks. Chances are, expectations will be mismatched between customer and service. What are contractual obligations for each party? How will liability be divided? How much transparency can a customer expect from the provider in the face of an incident?
Also, enterprises may push applications that have internal on-premises network security controls into the cloud, where those network security controls don’t work. If enterprise architects don’t understand the cloud environment, their application designs may not function with proper security when they’re run in a cloud setting, the report warned.

9. Shared Technology
In a multi-tenant environment, the compromise of a single component, such as the hypervisor, exposes more than just the compromised customer; it exposes the entire environment to potential compromise and breach. The same could be said of other shared services, including CPU caches, a shared database service or shared storage. The cloud is about shared infrastructure, and a misconfigured operating system or application can lead to compromises beyond its immediate surroundings.

 

DISCUSSION QUESTIONS:

Among all of these threats:

1. Is there any one of them you had just never thought about before?

2. Which one is the most worrisome for you and why?

Are Information Security executives lacking leadership skills?

Talk to CISOs/CSOs, as people have over the past decade, about their greatest perceived challenges in doing their job. More often than not you’ll hear about how their organization’s business leadership didn’t provide them the support and space they need to secure their organizations properly. You’ll often hear these complaints expressed as “security doesn’t get a seat at the table,” along with the resulting lack of budget.

I am no exception: I have experienced this too.

Many businesses view cyber security as an IT problem rather than a business problem; however, when you consider how dependent businesses are on IT, and more importantly on the information in those systems, businesses need to realize that cyber security truly is a business issue. But, ultimately, that convincing is the responsibility of the IT security leaders. They are the ones, after all, responsible for convincing management of the investments that need to be made. Business executives owe it to their organization to allocate resources in the best interests of the business. If the security team can’t make the business case for investment in security, then it’s probably fair to say that the responsibility is on them, not on the business executives.

The CSO is by definition responsible for security leadership in the organization, the one responsible for ensuring that senior business people, and indeed every user in the organization, understand the importance of information security. If that does not happen, one logical conclusion could be that he or she is not the right fit for the company; however, sometimes the hierarchy in place may be a large part of the issue, as the general theme seems to be that CISOs/CSOs are, in the majority of cases, not really true C-level executives (they actually report to a CIO or a CFO). According to Ernst & Young’s 2012 Global Information Security Survey, only about one quarter of the companies surveyed have given responsibility for information security to the CEO, CFO or COO, elevating it to a C-suite concern. And only 5 percent have information security reporting to the chief risk officer, the person most responsible for managing the organization’s risk profile.

So there seem to be at least two issues to address:

1. Better communications between security pros and upper-executives – Security budgets will become more adequate if we point out to business executives what the potential impact of lack of Security into the business is. IT Security professionals need to boost their confidence when talking to executives and requesting budgets to address security threats. Businesses executives look at specific issues, determine their potential impact on the bottom line, and what needs to be done to manage the issue, and whether or not it is actually worth dealing with the issue. If we accept that the executive board will not be sensitive enough to security threats, we are giving up without playing the game. Perhaps we’re not up to the task.
Executives often do not have a grasp on the state of defenses in an organization because security pros describe problems in esoteric terms. Security techs also tend to have “a bias that if you don’t speak my techno-lingo, you must not be bright”. CSOs/CISOs must bridge that gap.

2. Provide a vision of how to get there – That’s where the real value of CSO leadership comes into play: helping the business decide which areas need the most effort and risk reduction, and showing the way to get there. Once more, strong communication skills are needed, along with some common sense: if you do not put security in the context of achieving business goals, you won’t get many ears listening in the boardroom.
A recent Ponemon Institute study found that the average cost of a data breach to an organization is $5.4 million. Yet despite that potential loss, nearly half of survey respondents said board-level executives had a “sub-par understanding of security issues”. So explain things to executives in their language: threat size, risk potential, cost to mitigate, with the technological details left aside rather than in the forefront of your pitch, and more than likely you won’t be ignored.

 

Read more about this topic in the following articles:

1. CSO ONLINE – The CSO’s failure to lead



2. CSO Online Survey: execs clueless, security pros unsure in fighting cyberattacks


 

3. CSO Online – Does your title match your authority?


APT – Advanced Persistent Threats – Part 2: Prevention, Detection and Defense

SOPHOS_APT


The previous post was intended to provide an understanding of what an Advanced Persistent Threat (APT) is. The objective of this one is to explain what you can do about it, according to the Sophos whitepaper.

PREVENTION:

Where do the hackers find the information they need?
Sophos suggests that social media are a goldmine of information for hackers. Although employees may not post confidential business information, posts about business trips and company events can provide the opening that attackers are looking for. It is imperative that your company implements a social media acceptable usage policy for its employees and that periodic awareness training programs are in place as well.

What should you watch in order to prevent an attack?
First and foremost, watch for unpatched systems. Previously known vulnerabilities, whether at the OS level or (most likely these days) at the application level, can provide enough of an opening for attackers to infect a network. It is therefore imperative that all systems are kept up to date with security patches at all times. And if for any reason you cannot patch immediately, those systems are the ones to watch most closely until you can.
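As a toy illustration of tracking patch hygiene, the sketch below is hypothetical: it assumes you maintain an inventory of installed package versions plus a list of minimum patched versions. A real deployment would use a vulnerability scanner or the OS package manager rather than hand-maintained dictionaries.

```python
# Hypothetical sketch: flag installed packages older than the minimum
# patched version we know about. Versions are assumed to be purely
# numeric dotted strings (e.g., "2.4.9").

def parse_version(version):
    """Turn '2.4.9' into (2, 4, 9) so versions compare as tuples."""
    return tuple(int(part) for part in version.split("."))

def unpatched_systems(inventory, minimum_patched):
    """Return the names of packages whose installed version is below
    the minimum patched version. Unknown packages are skipped."""
    flagged = []
    for name, installed in inventory.items():
        required = minimum_patched.get(name)
        if required and parse_version(installed) < parse_version(required):
            flagged.append(name)
    return sorted(flagged)
```

A nightly run of such a check makes the "watch unpatched systems most closely" advice concrete: the flagged list is exactly the set of hosts deserving extra monitoring until they are patched.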

 

DETECTION: 

The first thing to monitor is outbound traffic. As mentioned, an APT typically must communicate back home. So if you monitor your outbound traffic (what is being sent, and where), you can pick up clues that something “fishy” is going on that has nothing to do with the needs of your business.
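To make the idea concrete, here is a minimal sketch of such a check, assuming flow records are available as (source, destination, bytes sent) tuples and that you keep an allowlist of known business destinations. The record format, destination names and byte threshold are all hypothetical.

```python
# Hypothetical sketch: flag outbound flows to unknown destinations,
# or unusually large transfers to known ones, as possible signs of
# command-and-control traffic or data exfiltration.

def suspicious_outbound(flows, allowed_destinations, byte_threshold=10_000_000):
    """flows: iterable of (source_host, destination, bytes_sent).
    Returns (source, destination, reason) alerts in input order."""
    alerts = []
    for src, dst, sent_bytes in flows:
        if dst not in allowed_destinations:
            # Traffic to a destination the business has no relationship with.
            alerts.append((src, dst, "unknown destination"))
        elif sent_bytes > byte_threshold:
            # Known destination, but an abnormally large upload.
            alerts.append((src, dst, "large transfer"))
    return alerts
```

Real deployments would feed this from firewall or NetFlow exports and tune the allowlist and threshold per environment; the point is simply that outbound flows, not just inbound ones, deserve scrutiny.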

Another thing to monitor regularly is external port scanning, a typical technique used to discover which systems a computer can reach that might contain interesting data. Antivirus, application control and intrusion prevention systems can detect a number of malicious port scanning applications.
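The core heuristic behind port-scan detection is simple: one source touching many distinct ports on a host in a short window looks scan-like. The sketch below illustrates that heuristic over a log of connection attempts; the tuple format and the threshold of 20 ports are assumptions, and real IDS/IPS products use far more sophisticated logic.

```python
from collections import defaultdict

# Hypothetical sketch: detect scan-like behavior by counting how many
# distinct destination ports each (source, destination) pair touches.

def detect_port_scans(connection_attempts, port_threshold=20):
    """connection_attempts: iterable of (source_ip, dest_ip, dest_port).
    Returns sorted (source_ip, dest_ip) pairs that touched more than
    port_threshold distinct ports on the same destination."""
    ports_seen = defaultdict(set)
    for src, dst, port in connection_attempts:
        ports_seen[(src, dst)].add(port)
    return sorted(pair for pair, ports in ports_seen.items()
                  if len(ports) > port_threshold)
```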

Also watch for unusual patterns of behavior: having full real-time reporting capabilities, including historical data, readily accessible can help identify peaks in traffic to particular hosts or for particular data types, such as encrypted files.
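One simple way to turn historical data into a "peak" alarm is to compare each host's traffic today against its own baseline. The sketch below flags hosts whose volume sits several standard deviations above their history; the data shapes and the z-score cutoff are illustrative assumptions, not a prescription.

```python
import statistics

# Hypothetical sketch: flag hosts whose traffic today is far above
# their own historical baseline (a basic z-score test).

def traffic_peaks(history, today, z_threshold=3.0):
    """history: dict host -> list of past daily byte counts.
    today:   dict host -> today's byte count.
    Returns sorted hosts whose z-score exceeds z_threshold."""
    peaks = []
    for host, counts in history.items():
        if len(counts) < 2:
            continue  # not enough history to estimate a baseline
        mean = statistics.mean(counts)
        stdev = statistics.stdev(counts)
        if stdev and (today.get(host, 0) - mean) / stdev > z_threshold:
            peaks.append(host)
    return sorted(peaks)
```

The same comparison can be run per data type (e.g., volume of encrypted files leaving a host) rather than per host, which matches the whitepaper's suggestion more closely.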

 

DEFENSE:

I am worried and I am ready to spend! Is there any software solution that can safely defend me from APTs?
You will hear many vendors claim that only their solution can protect you, and that traditional security solutions such as antivirus systems are obsolete. But the real answer is this:

  • No single solution can protect you from an APT. The best practice is always to have many layers of protection to improve your defenses against a number of different threats.
  • Web exploits, phishing emails and remote access Trojans are all common tools used in APTs. Traditional security systems are an essential part of your toolbox to detect the initial stages of an attack and prevent it from moving to the next stage.

There is no silver bullet to defend against an APT attack, no matter what vendors of specialist systems would have us think. Intelligent security practices, including an end-to-end strategy, are still the most effective protection against both advanced and common cyber attacks. The layers of protection you need include (click on picture to enlarge):

APT layers protection

 

A few words on Sandboxing:

Sandboxing is a much-discussed topic in protecting against APTs. A sandbox is a physical or virtual secure environment used to run and test unverified code or programs. The sandbox is isolated from any production environment where the code could do harm, and therefore allows testing and analysis even of malicious code.
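To illustrate the isolation idea in miniature (and only the idea: a real sandbox also drops privileges, blocks network access and instruments the guest), here is a sketch that executes an untrusted Python snippet in a separate process, inside a throwaway working directory, with a hard time limit.

```python
import subprocess
import sys
import tempfile

# Toy illustration of sandbox-style isolation: a separate process,
# a scratch directory that is destroyed afterwards, and a timeout.
# This is NOT a security boundary on its own.

def run_in_sandbox(code, timeout_seconds=5):
    """Run an untrusted Python snippet out-of-process.
    Returns (return_code, stdout); return_code is None on timeout."""
    with tempfile.TemporaryDirectory() as scratch:
        try:
            result = subprocess.run(
                [sys.executable, "-c", code],
                cwd=scratch,            # confine file writes to scratch space
                capture_output=True,    # observe behavior instead of sharing a console
                text=True,
                timeout=timeout_seconds,
            )
            return result.returncode, result.stdout
        except subprocess.TimeoutExpired:
            return None, "killed: exceeded time limit"
```

Commercial sandboxes apply the same pattern at a much deeper level: full virtual machines, instrumented OS APIs and automated verdicts on the observed behavior.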

There are two types of sandboxing, with very different requirements when it comes to administration and performance:

a) Selective sandboxing
Selective sandboxing analyzes only those files that are identified as unknown and selected for analysis. If a file is found to be malicious, a new definition is created and distributed to prevent future infections. Such analysis generally relies on an existing lab infrastructure, although clients can opt out of sending files to the labs. However, sharing anonymized data to improve threat protection can benefit the security community as a whole. Selective sandboxing as part of an existing next-generation firewall solution can greatly improve the level of protection if its implementation is simple and it is fully integrated with other security solutions. Such sandboxing techniques are offered by many leading vendors in the network security space, who generally use a cloud-based infrastructure designed to have minimal effect on system performance.
b) Full sandboxing
Some systems focus on forensics and analyze all data in a sandbox. The way these systems work is very diverse, and it would not be accurate or fair to assess them all together. In general, they consist of dedicated appliances hosted on premises that do not include other security solutions. When selecting a system that uses full sandboxing, there are some general things to consider:
– How much training is required to get the system up and running?
– Is it scalable for your size of business and your requirements?
– Do you have the necessary resources and expertise to effectively implement such a system?
– What other security can the solution offer?
– What effect does the solution have on your overall network performance?
In most cases, a sandboxing solution is not designed to replace a next-generation firewall, so you will still need to implement another network security appliance to provide complete protection.

 

Companies like Sophos and FireEye, to name just a couple of examples, offer modular, layered solutions that can provide the level of APT protection you may be looking for.

APT – Advanced Persistent Threats – Part 1: Definition

SOPHOS_APT

 

This whitepaper produced by Sophos Inc. aims to give you an overview of the common characteristics of Advanced Persistent Threats (APTs), how they typically work, and what kind of protection is available to help reduce the risk of an attack.

 

DEFINITION:

First of all, what is an Advanced Persistent Threat? If you do not know, you are not alone: according to a 2013 Ponemon Institute survey, no less than 68% of IT managers do not know what an APT is.

As the name implies, APT consists of three major components/processes: advanced, persistent, and threat. The advanced process signifies sophisticated techniques using malware to exploit vulnerabilities in systems. The persistent process suggests that an external command and control is continuously monitoring and extracting data off a specific target. The threat process indicates human involvement in orchestrating the attack.

In other words, we could say that behind an APT there is a set of highly motivated and sophisticated human attackers who will be particularly persistent in pursuing their objective to attack you. The motivations are not just monetary: in fact, APT usually refers to a group, such as a government or a hacktivist group, with both the capability and the intent to persistently and effectively target a specific entity for a variety of reasons.

The term APT is being commonly used to refer to cyber threats, in particular that of Internet-enabled espionage using a variety of intelligence gathering techniques to access sensitive information, but applies equally to other threats such as that of traditional espionage or attack.

Some common traits of APTs are:

1. Targeted
Attacks are mostly targeted against a particular organization, group or industry. Before the attack, the attackers may conduct extensive research to collect intelligence about their target. Such groups of attackers are usually very well funded and organized. In other words, your enemies tend to know you well.

2. Goal-oriented
The attackers generally know what they want to achieve or access before they get in. With sufficient intelligence, the attackers will have a number of options to actually penetrate a network and get to the information or systems they want. They will look for the weakest link in your security systems and find the path of least resistance. Over the past several years, criminal organizations and individual bad actors have found that, by taking advantage of poor key and certificate management practices, they can breach trust to infect systems with information-siphoning malware and, in some cases, even implant weaponized code that can inflict physical damage on facilities.

3. Persistent
Having successfully found a way into a network, the first infected client may not necessarily be of great interest but is more a means to an end. Once inside, such attackers are likely to slowly move further into the network and target systems which have access to more valuable data, e.g., IT administrators or senior executives who have the credentials to access higher-value systems.

4. Patient
Whereas many cyber attacks are designed to wreak havoc by blocking access to systems and extracting data, APTs are very likely to initially do nothing. The idea is for the attack to go unnoticed, and the best way to do that is to avoid attracting attention in the first place. This non-activity can continue over days, weeks, months or even years.

5. Call home

No attack is complete without some kind of communication to the outside world. At some point, the attackers will call home. They may do so once the first system has been infected, after the data they’ve targeted is located and collated, or when the infected systems have sufficient access to that data. Communication with the command and control (C&C) host is generally a repeated process, used to receive further instructions or to begin extracting data in bite-sized chunks.

 

The typical APT life cycle is represented here (click on image to enlarge):

APT lifecycle

 

A common myth about APTs is that they only target large enterprises and nation states. That is not the case.

Keep in mind that if your data is valuable to you, then it could be of value to someone else, e.g., competitors. Also, previous APTs have shown that an attack can spread to organizations that were not the original target. Large enterprises and government organizations often use a diverse supply chain made up of smaller companies: if you’re one of those suppliers, you could be liable if the data you handle is breached, even if it is not your own. APT actors expect that smaller suppliers may have weaker security defenses, and often decide to attack them mostly as a vehicle to eventually penetrate their real target. If you work for a smaller company that is a supplier of a large one (public or private), consider yourself a possible target.

How do attackers get into your systems and compromise your credit card data

Fireeye


We all know this: Retailers are a favorite target for cybercriminals. Credit card data is a lucrative asset and can be quickly monetized. High-traffic periods such as the holiday shopping season encourage attackers to invest in schemes that can be reused across multiple retailers for maximum profit.

This infographic published by FireEye Inc. explains, in a quick format, one frequent strategy used by well-motivated hackers to break into the IT infrastructure of a typical retail business.

Despite 82 percent of retailers reporting compliance with the PCI DSS standard in 2013, FireEye reports that its consultants responded to an increasing number of retail financial theft incidents. In some instances the attacker maintained access to the compromised systems for up to six months. Once more, real-world experience unfortunately suggests that compliance does not necessarily translate into safety.

What can you do to prevent or mitigate the risk associated with these attacks? Here are the main suggestions reported by FireEye consultants:

  • Implement strict network segmentation of the PCI environment: Segment any system that handles cardholder data from the rest of the corporate environment. Require two-factor authentication for access to the PCI environment.
  • Manage privileged accounts: Each system in the PCI environment should have its own unique local administrator password. Employ the principle of “least privilege” to all account and group permissions, including the service accounts.
  • Encrypt cardholder data: Consider a POS solution with end-to-end asymmetric encryption, starting at the PIN pad reader.
  • Secure endpoints: Ensure that all critical systems in the environment implement application whitelisting. Patch all third-party applications and operating systems. Install an endpoint threat detection and response solution. Consider implementing a file monitoring solution that tracks when files have been created on a system.
  • Actively monitor: Monitor the PCI environment regularly for abnormal activity, such as suspicious logons, creation of unexpected files, or unusual traffic flow.
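The last recommendation, active monitoring for abnormal activity, can be illustrated with a toy rule set over logon events. The event format, business hours and "known accounts" list below are assumptions for illustration; production monitoring would live in a SIEM with many more signals.

```python
# Hypothetical sketch of the "actively monitor" advice: flag logons
# from accounts we never provisioned (possible attacker-created
# accounts) and logons outside business hours.

def abnormal_logons(events, business_hours=(7, 19), known_accounts=frozenset()):
    """events: iterable of (account, hour_of_day, host).
    Returns (account, host, reason) alerts in input order."""
    alerts = []
    start, end = business_hours
    for account, hour, host in events:
        if account not in known_accounts:
            alerts.append((account, host, "unknown account"))
        elif not (start <= hour < end):
            alerts.append((account, host, "off-hours logon"))
    return alerts
```

In a real PCI environment the same idea would also cover unexpected file creation and unusual traffic flows, the other examples FireEye's consultants mention.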

 

Read more:

INFOGRAPHICfireeye-retail-campaign

RETAIL CASE STUDY: fireeye-retail-casestudy

Websense Security Predictions Report 2014

Every fall, Websense® Security Labs™ researchers predict the key threats your organization should prepare for in the coming year.

Their eight predictions for 2014 are as follows:

  • Advanced malware volume will decrease.
  • A major data-destruction attack will happen.
  • Attackers will be more interested in cloud data than your network.
  • Redkit, Neutrino and other exploit kits will struggle for power in the wake of the Blackhole author arrest.
  • Java will remain highly exploitable and highly exploited — with expanded repercussions.
  • Attackers will increasingly lure executives and compromise organizations via professional social networks.
  • Cybercriminals will target the weakest links in the “data-exchange chain.”
  • Mistakes will be made in “offensive” security due to misattribution of an attack’s source.

The whole report can be read here below:

websense-2014-security-predictions-report

Websense predictions 2014

 

Some of the biggest challenges will come from areas where most security providers aren’t even looking. You can use these insights to review current defenses, identify security gaps and prepare new safeguards.

MAJOR INSIGHT OF THE REPORT:

The quantity of new malware is beginning to decline. Unfortunately, this isn’t as good news as it seems.

Cybercriminals will rely less on high-volume advanced malware because, over time, it runs a higher risk of detection. They will instead use lower-volume, more targeted attacks to secure a foothold, steal user credentials and move laterally throughout infiltrated networks. Although the volume of attacks will decrease, the risk is even greater because of the increasingly stealthy nature of threats. In many cases, a single entry point into an organization’s network is enough to build up to a complex data exfiltration attack.

In addition to that, if cybercriminals steal user credentials, they can directly access cloud services and mobility infrastructure (e.g., VPN or RDP). This access would allow criminals to establish a presence by creating new domain-level user accounts, without resorting to massive malware distribution.

 

RECOMMENDATION FROM WEBSENSE:

Security teams need a comprehensive security solution that not only detects malware activity, but goes a step further by detecting and protecting against anomalous activity. It’s time to transform security thinking from “setting and forgetting” to using technology that can stop threats by analyzing irregular behavior and sleuthing through the data. Stopping the most advanced, targeted attacks requires amplified information collecting that investigates threat behavior in real time.

 

DISCUSSION QUESTION:

Can you name a commercially available comprehensive security solution such as the one described above?

I will start by mentioning the FireEye platform.

Compliant is not secure – Target CEO fired after data breach

Target CEO

 

A common perspective in most firms is that cyber security is primarily the responsibility of the IT department. If a data breach incident occurred, the spotlight focused on root causes and the technical fixes needed to remedy the matter. Rarely would such an issue have repercussions for any executive team member and, when it did, the senior IT executive was the only one to take the blame and possibly lose his seat.

That all changed earlier this month when Target’s CEO Gregg Steinhafel, a 35-year employee of the company with the last six at the helm, resigned in the wake of the recent holiday-season credit card security breach that affected 40 million customers. While many speculate about the reasons for his sudden departure (Target’s foray into Canada has not been particularly successful either), it’s likely that the data breach incident provided the additional impetus the board needed to request his resignation. One more clue comes from the news that Target also replaced its CIO with Bob DeRodes, an executive with a very strong background in information security.

This should be a harbinger for CEOs and board members of companies large and small. The cost to Target of the data breach will be in the billions by most estimates. Even for CEOs who do not report to outside boards, a significant data breach, particularly if not covered by insurance, could cost them their company.

Another, perhaps even more interesting, perspective highlighted by this article is that COMPLIANT DOES NOT MEAN SECURE. Target, in fact, passed its compliance requirements several months before the breach occurred, but as the evidence now clearly shows, it was not secure. Going back in history, perhaps not many readers know that the Titanic was actually compliant with the British Board of Trade rules, which required all boats over 10,000 metric tons to have 16 lifeboats, regardless of how many passengers were on board. So was the Titanic compliant? Yes. Did compliance avoid a tragedy? No. Read more here.

Titanic

 

Companies must ensure they are secure by going beyond the minimum compliance standards. One way of doing so is to employ “white hat” penetration testing companies to actually test their security. Some common sense should be applied too (e.g., we have all the firewalls, IDS and IPS in place: fine, but are they configured correctly?).

Many times CEOs and their C-level reports are frustrated by the lack of appropriate training that would let them determine, at the executive level, what the real risk to their company is. They don’t want to get into the technical details of what the Heartbleed bug does, for example, but they do want to be able to quantify in their mind what their risk is. With the firing of the Target CEO, that risk is now a personal as well as a corporate risk for members of the executive suite.