By Vanessa Pegueros
DocuSign Chief Information Security Officer

An extraordinary amount of money and time is spent on cybersecurity detection and response, and much of that conversation is technology-focused. In this series of articles, DocuSign CISO Vanessa Pegueros explores a different aspect of incident response: the human being. She argues that people ultimately orchestrate incident response, that the care and development of employees should be at least as important as the development of technology, and she offers considerations for developing the human elements of incident response.

Part One – Introducing Trauma as a Security Concept

Source: securitycurrent

Security threats can come from anywhere, but they most often originate from the inside. These types of threats are on the rise: in a recent report, 39% of IT professionals said they were more concerned about the threat from their own employees than about the threat from outside hackers.

In May 2014, the U.S. Department of Homeland Security defined Insider Threat as “a current or former employee, contractor, or other business partner who has or had authorized access to an organization’s network, system, or data and intentionally misused that access to negatively affect the confidentiality, integrity, or availability of the organization’s information or information systems.”

The potential risks associated with an Insider Threat are particularly disturbing, since Insiders already have the credentials and access needed to do significant damage to your organization. Traditional data security tools such as encryption offer little protection, since Insiders are already authorized past these barriers and can use their network credentials to reach your sensitive data.

As a recent example, employees at AT&T Services accessed customer records and sold the information to unauthorized third parties. As a result, in 2015, AT&T Services paid a $25 million civil penalty to resolve the consumer privacy violations.

While we should not ignore the very real danger posed by this type of intentional threat, we must also recognize the role of negligent employees in delivering a similar result. The fact is that the road to a cyberattack is often paved with the best of intentions.

In February 2016, Snapchat announced that one of its employees had responded to a phishing scam by sharing payroll information with, they believed, the company’s Chief Executive Officer. Instead, they had answered an email sent by an external actor who exploited the employee’s negligence to obtain sensitive information. While it was an honest mistake, the employee’s actions had devastating consequences for the organization as well as for the individuals whose data was breached. According to the FBI, this form of business email compromise has cost more than $1.2 billion over the past two years.

Cyberattacks originating from negligent employees are rapidly increasing. Employees have access to sensitive information that, if exposed, could negatively impact their organization. Yet most corporate research and investment on the Insider Threat has focused on those defined by Homeland Security: malicious behavior of purposeful hackers. We need to understand that the Insider Threat is considerably broader.

Contrary to popular belief, Insider Threats should not be restricted to these malicious profiles. Many would argue that well-intentioned, negligent employees, as in the Snapchat case, present a much greater risk. In one survey, 46% of IT decision makers viewed the employee as the greatest risk to the security of their organization, and among those respondents, the ‘accidental’ threat outweighed the ‘intentional’ threat two to one.

While no one can prevent all Insider Threats, adopting a transparent security policy is a key step in securing employee support while building greater trust between employees and employers. IT should work closely with senior leadership to integrate responsible IT security behavior training, including random user testing and pre-emptive alerts that call out unusual activity or access.
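As a minimal sketch of what such a pre-emptive alert might look like, the snippet below flags users whose daily record-access volume far exceeds their own historical baseline. The field names, threshold factor, and sample data are all hypothetical, chosen purely for illustration; a real deployment would tune these against its own audit logs.

```python
from collections import defaultdict

# Hypothetical example: alert when a user's access volume jumps well
# above that user's own historical daily average.

def build_baseline(history):
    """history: list of (user, records_accessed_that_day) observations."""
    totals = defaultdict(list)
    for user, count in history:
        totals[user].append(count)
    return {u: sum(c) / len(c) for u, c in totals.items()}

def unusual_access(baseline, today, factor=3.0):
    """Return users accessing more than `factor` times their daily average."""
    return [u for u, count in today.items()
            if count > factor * baseline.get(u, float("inf"))]

history = [("alice", 20), ("alice", 25), ("bob", 200), ("bob", 180)]
baseline = build_baseline(history)
alerts = unusual_access(baseline, {"alice": 120, "bob": 210})
# alice is roughly 5x her baseline and gets flagged; bob stays within range
print(alerts)  # ['alice']
```

The point is not the arithmetic but the principle: each user is compared against their own pattern, so a call-center agent and a database administrator are judged by different yardsticks.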

Organizations must also implement technology that delivers proactive and intelligence-driven approaches to security to help reduce risk and enable IT to effectively support business initiatives.

The successful prevention of any threat depends on our ability to accurately define and identify it – ideally before it has infiltrated our networks and data.  When addressing the risk of Insider Threats, we must look beyond those who are intentionally doing harm and place equal emphasis on those who are simply doing their job.

About the author: Eric Aarrestad, Senior Vice President, Product Management, leads Absolute’s focus on defining and driving requirements for Absolute’s product portfolio. Under Eric’s guidance, the product management team defines and communicates the product strategy and roadmap for all segments of the business. Eric is a seasoned information security executive, with a proven track record of market impact through building, scaling and growing global cloud, SaaS, mobile, data analytics and security products and services. Eric has worked in enterprise information security for more than 20 years, having previously held leadership positions at Microsoft, HEAT Software and WatchGuard Technologies.

Copyright 2010 Respective Author at Infosec Island
Source: infosecisland

Real-life experiences are often transformed into successful movies, but a piece of ransomware inspired by the Mr. Robot TV series proves that the reverse is also possible. 

The new ransomware family was named FSociety because it uses an image that appeared in the Mr. Robot show as the logo of an infamous hacking group called FSociety. According to Bleeping Computer, the malware’s creator appears to be a fan of the show, but the ransomware itself is in its early stages of development.

For the time being, the ransomware doesn’t display a ransom note or provide users with information on how to contact the author. It does, however, encrypt users’ files, although researchers discovered that only a test folder on the Windows desktop is targeted at the moment.

Discovered by Michael Gillespie, the FSociety ransomware is based on the EDA2 educational ransomware that has already spawned numerous variants this year. Released at the beginning of 2016, the educational ransomware has since been retired by its developer, Utku Sen.

Like other EDA2 variants out there, the newly spotted ransomware family was designed to encrypt users’ files using AES encryption. Next, the malware uploads the RSA-encrypted decryption key to a command and control (C&C) server.

The new threat is likely to receive improvements shortly, but it remains to be seen what these will be and whether they will improve the code enough to prevent security researchers from cracking it.

Previously, researchers were able to neutralize EDA2-based ransomware quickly because of a backdoor that Utku Sen included in the code. Similarly, flaws in Hidden Tear’s code allowed security researchers to crack the encryption of that ransomware’s offspring as well.

Related: Variants Spawn From Hidden Tear Ransomware

Related: Radamant C&C Server Manipulated to Spew Decryption Keys


Over the course of the last 18 months, it has become increasingly evident that organizations need to do more to stop the growing epidemic of security failures and data breaches that are threatening the very ability to conduct business. Customers’ sensitive financial and personal information needs to be protected.

In response, many companies now realize they need to shore up their internal efforts to deal with attackers who dwell on the inside for months looking for their target. In the process, the sheer number and targeted specificity of attacks have made it clear that no single company’s IT department can weed through all the potential problems and possible attack notifications to find the real threats. Even as they deploy next-generation firewalls and endpoint detection and response products that move away from signatures toward indicators of compromise (IOCs), promising to close the gap on detection and dwell-time exposure, alert fatigue continues to plague many IT security teams.

In order to step up their game, businesses and organizations have been implementing security analytics technologies. The promise of security analytics is that it will do what humans in an IT department cannot: review endless amounts of data and flag the real threats you should pay attention to.

Not all security analytics solutions are created equal, however. There are five key characteristics critically important to ensuring that your security analytics are effective and capable of stopping today’s advanced threats.

Extreme Flexibility to Task and Data

Security analytics must be ready to take on any problem presented to it. Strong, useful security analytics has to do more than detect simple intrusions. It must be able to consider everything that could potentially be a problem. To do this it has to work with any source of data, be that a network, device, server, user log, and so on. Think of a broad range of use cases.

However, just being able to interface with these information silos is not enough. Security analytics needs to analyze several different features of the data, from metrics like response times or counts to information coming from users, hosts and agents. It also needs to be smart enough to detect patterns like ‘beaconing’ and high information content in communication packets, and then draw conclusions about them and form insights into what is actually happening and where.

In other words, to be successful, security analytics needs to be able to use every data source, data feature and potential problem laid out in front of it to detect unusual behaviors related to advanced attacks; then analyze them and present results to the user.
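To make the two pattern checks named above concrete, here is a toy sketch of each: beaconing shows up as near-constant intervals between connections, and encrypted or compressed payloads show up as high Shannon entropy. The thresholds are invented for demonstration, not production values.

```python
import math
import statistics

def looks_like_beaconing(timestamps, max_jitter=0.1):
    """Flag near-constant gaps between outbound connections (in seconds)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 3:
        return False
    mean = statistics.mean(gaps)
    # Low relative spread of the gaps suggests a timer-driven beacon.
    return mean > 0 and statistics.pstdev(gaps) / mean < max_jitter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; high values suggest encrypted or compressed content."""
    if not data:
        return 0.0
    total = len(data)
    counts = [data.count(b) for b in set(data)]
    return -sum(c / total * math.log2(c / total) for c in counts)

# A connection every ~60 seconds with only tiny jitter looks beacon-like:
print(looks_like_beaconing([0, 60, 120.5, 180, 240.2]))  # True
# Two-symbol text is far below the 8-bits-per-byte ceiling of random data:
print(shannon_entropy(b"aaaabbbb"))  # 1.0
```

Real detectors add plenty of refinement on top of this (whitelisting legitimate periodic traffic like NTP, windowing over long captures), but the underlying statistics are this simple.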

Speedy, Accurate, Real-Time Analysis

With true security analytics implemented, the analysis should be fast, giving results in near real time and making the user feel it is almost automatic. Speed in processing data is important when it comes to security issues, as any delay in identifying problems can be quite costly for companies, especially while an active data breach is occurring.

At the same time, while speed in processing is very important – it is second to the most important element of security analytics processing: security analytics needs to understand what it’s looking at and draw conclusions about what is important to the end user.

With an ever-increasing number of cyberattacks to worry about, it is easy to see how IT managers become overburdened with alerts flagging a potential breach or other issue that needs attention. Many of these issues are not breaches or problems that warrant immediate (if any) attention; but with most security software that looks at signatures or ill-defined IOCs, everything is flagged so that nothing is missed. This clearly works to the advantage of the attacker, who hides in the noise of the environment. With alert fatigue a dominant complaint, it becomes harder and harder for analysts to see through the waves of alerts many advanced detection products emit.

Learns from the Past, Applies to the Future

Here is where machine-learning technology often enters the discussion. There are limits to what typical security tools and a single human end user can accomplish. There are only so many hours in the day to review alerts or notifications – and once you start self-selecting which ones seem important, you are already increasing the possibility that you miss a critical notification. Furthermore, while many companies deploy rule sets within their SIEM to aid in the filtering of highly relevant events, these are limited to a static understanding of “what is problematic” and not nearly as dynamic as a mechanism that could look to identify anomalies based on detected patterns from baselines.

Machine learning helps security analytics take the analysis of potential issues a step beyond seeing something and saying something. With machine learning technology in place, security analytics can now see something, correlate its significance and then ensure that it is only identifying the most important items based on probability scoring on the data.

Machine learning is a critical part of most security analytics: it can recognize and understand patterns, the periodicity of data and anomalies within the data, learning from each instance what normal behavior is and where the outliers are. This makes it possible for the IT manager to act on every alert received based on its analytical relevance score, instead of hoping he or she selected the correct ones.
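A deliberately simple stand-in for the learned-baseline idea is scoring each new observation by how far, in standard deviations, it sits from the metric's historical mean. Real products use far richer models (periodicity, seasonality, multivariate features), but the scoring-over-a-baseline principle is the same. The sample traffic figures below are hypothetical.

```python
import statistics

def anomaly_score(baseline, value):
    """Standard deviations between `value` and the baseline mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline)
    if stdev == 0:
        return 0.0 if value == mean else float("inf")
    return abs(value - mean) / stdev

# Hypothetical daily outbound-traffic volumes (GB) for one host:
baseline = [10, 12, 11, 13, 10, 12, 11, 13]

print(anomaly_score(baseline, 12))  # well inside the normal range
print(anomaly_score(baseline, 60))  # scores far above any sane threshold
```

Scoring every alert against such a baseline is what lets the analyst triage by rank instead of wading through a flat, unordered queue.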

Ability to Scale

Security analytics should be able to grow and scale with the organization. As businesses become more established and achieve greater levels of success, the amount of data they generate, the number of customers they have and the size of their operations all grow. This means the probability of being “targeted” by cybercriminals or hackers grows as well. However, it is not always the biggest companies that are hit first or most often; it is the ones that are least prepared to prevent and detect the attackers.

Security analytics needs to be able to handle all of these instances and scale as required. An increasing amount of data should not faze a strong security analytics solution. On the contrary, more data should add context to an attack and lead to proper identification of an attacker’s techniques.

Ease of Deployment and Understanding Results

This last item could easily be separated into two, but they are two sides of the same coin. There are an increasing number of security analytics-based products on the market, with many new entrants coming from adjacent parts of the security space that have bolted on analytics (often because their products generate too much data to be useful otherwise). Ease of deployment and ease of understanding results both come down to realizing value from the analytics performed.

It is increasingly important to be able to deploy ready-built, well-defined “recipes” relevant to detecting intrusions as part of security analytics. Tuning to the kinds of customer data present can be an iterative cycle, but the most successful solution will be the one that is most flexible and best aids the tuning process.
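One way to picture such a deployable, tunable "recipe" is as a declarative rule over event fields that can be adjusted without code changes. The field names and thresholds below are hypothetical, meant only to illustrate the shape of the idea.

```python
# Hypothetical detection recipes: each is pure data, so tuning a
# threshold or adding a rule never requires touching the engine code.
RECIPES = [
    {"name": "excessive_failed_logins",
     "field": "failed_logins", "op": "gt", "threshold": 10},
    {"name": "off_hours_admin_access",
     "field": "admin_logins_after_hours", "op": "gt", "threshold": 0},
]

OPS = {"gt": lambda v, t: v > t, "lt": lambda v, t: v < t}

def evaluate(event, recipes=RECIPES):
    """Return the names of every recipe the event trips."""
    return [r["name"] for r in recipes
            if OPS[r["op"]](event.get(r["field"], 0), r["threshold"])]

event = {"failed_logins": 25, "admin_logins_after_hours": 0}
print(evaluate(event))  # ['excessive_failed_logins']
```

The iterative "tuning" the text describes then amounts to editing thresholds in the recipe data as false positives and misses are observed, rather than shipping new detection code.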

For security analytics to be usable, the results need to convey things like attack progression and threat classification in the vernacular of the users. This aspect is often lost, or left for customers to consume and display in their own dashboards. Many vendors assume each customer has an army of data scientists on staff who can use the results to “tell the story” to the security analyst. This is simply not the case. Therefore, look to shorten time to value and deploy smart, highly tunable security analytics that speak the language of your security team.


The importance of security analytics cannot be overstated, especially as data breaches, unfortunately, continue to dominate the headlines and attackers devise new, targeted means of circumventing prevention technologies. To be successful, you first have to understand the key elements of security analytics, so that what you implement checks all the boxes and you are not left wondering why your analytics solution isn’t finding everything it should. By implementing a security analytics solution that closely aligns with the five elements above, you will be in a better position to short-circuit the next attack on your business.
