Sunday, July 31, 2016

Feedly:SANS Internet Storm Center, InfoCON: green. Are you getting I-CANNED ?, (Mon, Aug 1st)



from SANS Internet Storm Center, InfoCON: green

One year ago, I already covered the impact that ICANN's latest money grab was having on security, see http://ift.tt/2as7J45. ICANN is the organization that rules the Interwebs, and decides which "top level" domain names can be used. A while back, they decided that they needed more money, and embarked on a "manifest destiny" like trek to discover domain name lands that they could homestead for free, and then sell to the highest bidder.

Thanks to this, we now have generic top level domains (gTLDs) like ".support" and ".shop" and ".buy" and ".smile", in addition to ".com", ".net" & co. Some of these new native lands that ICANN offers seem to be rich in gold or silver, since A LOT OF MONEY is changing hands for the privilege to own one of these freshly plowed plots of cyberspace.

The problem is, most newly arriving settlers are outlaws, and there is no sheriff in town! For example, this past week, most of the redirector pages leading to exploit kits were domiciled under the new gTLDs .top and .xyz.  To add insult to injury, some of the miscreants that register these domains don't even TRY to hide. They use the same name and email address for six, eight weeks in a row.  Once a domain of theirs gets blacklisted by filters, the bad guys already have 10 other domains registered, and they simply relocate.

Two weeks ago, ICANN published their "Revised Report on New gTLD Program Safeguards to Mitigate DNS Abuse", suggesting - at least on the surface - that they are aware of what is going on. But let me share a couple of nuggets:

[...] ICANN and its various supporting organizations have some purview over registration issues through the policy-making and enforcement processes, while use issues are more difficult to confront given ICANN’s limited authority over how registrants use their domain names.  [Translation: Malware TLDs are not our fault]

The ICANN-sponsored survey referenced above reported that consumer trust in new gTLDs is much lower than in legacy TLDs, with approximately 50% of consumers reporting trust in new versus approximately 90% reporting trust in legacy TLDs.  [Translation: Well, DUH! Sometimes, consumers are right!]

[...] New TLD domains are more than twice as likely as legacy TLDs to appear on a domain blacklist—a list of domains of known spammers—within their first month of registration. [Translation: We knew this was going to happen, but let's conduct another study while we rake in the dough]

The report goes on to list the "Nine Safeguards" that ICANN put in place to prevent abuse. All of them make perfect sense. What is glaringly missing, though, is what I would suggest as Safeguard #10: "A registrar where more than 1% of their registered domains, or more than 0.01% of the registered domains per TLD, end up on a public blacklist (like Google SafeBrowsing) shall receive a warning, and upon recurrence within 3 months, have their license to act as a registrar withdrawn by ICANN with immediate effect."

That whole "Oh we can't do anything about how domains are USED" cop-out is utter bull. ICANN raked in piles of $$ in the gTLD land grab, and they can afford to hire auditors who compare the zone files against the public block lists, and take decisive action against the registrars that feed on the bottom. Financial institutions have an FTC-enforced "red flag rule" that requires them to know who they do business with, or face the consequences. Why don't registrars?

As a (small) upside, ICANN helpfully publishes a list of all the new gTLD domains. If you are running a corporate web filter, I suggest you simply chuck them all onto the BLACKLIST, no questions asked, and keep them blocked. Fallout will likely be minimal. You can always re-open a specific gTLD once you have had 20 or so really worthwhile and business-relevant whitelisting requests for domains under it. Odds are, 95% of the new gTLDs will never reach that threshold. And by blocking them by default, you are bound to keep lots and lots of malware, spam and phishing URLs at bay.
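If you want to automate that, a rough sketch along the lines below turns the public TLD list into a wildcard blocklist. The IANA URL, the legacy-TLD exclusion list and the wildcard output format are assumptions; adapt them to whatever your web filter actually expects.

$ curl -s https://data.iana.org/TLD/tlds-alpha-by-domain.txt \
    | grep -v '^#' \
    | tr 'A-Z' 'a-z' \
    | grep -Ev '^(com|net|org|edu|gov|mil|int|biz|info|name)$' \
    | awk 'length($0) > 2 { print "*." $0 }' > gtld_blocklist.txt

The length check skips the two-letter country-code TLDs; any legacy gTLD you still want to allow goes into the exclusion list.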

Here's a special shout-out to Charity Baker aka Jaclyn Latonio, who yesterday registered about 200 typo domains like citgibank.com, symanpec.com, jpmoragan.com, etc., showing how such blatantly obvious abuse is not limited to the new gTLDs. Rather, the lack of oversight, accountability and enforcement is at the core of the system. Makes one wonder where all that money goes. Makes me wonder if ICANN and FIFA (John Oliver / Youtube) have anything else in common.

You are welcome, ICANN. Consider this my public input to your request for comment.

 

 

Feedly:SANS Internet Storm Center, InfoCON: green. ISC Stormcast For Monday, August 1st 2016 http://ift.tt/2ak10Kn, (Sun, Jul 31st)



from SANS Internet Storm Center, InfoCON: green

...

Feedly:SANS Internet Storm Center, InfoCON: green. Sharing (intel) is caring... or not?, (Sun, Jul 31st)



from SANS Internet Storm Center, InfoCON: green

I think almost every one of us working in the IR/Threat Intel area has faced this question at least once: shall we share intel information?

Although I have my own opinion on this, I will try to state some of the most common arguments I have heard over the years, for and against sharing publicly, as objectively as possible so as not to influence the reader.

Why not share publicly?

  • Many organizations do not share because they do not want to give away the information that they (may) have been attacked or breached. In this regard, there are closed, trusted groups of organizations within the same sector (e.g. ISAC communities), and the willingness to share increases within such closed environments.
  • Trust is an extremely important factor within the intelligence community, and establishing trust is impossible when sharing publicly. Moreover, by not knowing with whom they are sharing, people are inclined to share less or not to share at all.
  • Part of the community suggests that we should “stop providing our adversaries with free audits”[1], since on many occasions a clear change in TTPs has been observed after analysis results were published in blogs or reports.

 

Why share publicly?

  • Relegating everything to sub-communities risks missing the big picture, since it tends to create silos in the long term, and organizations relying entirely on them may miss the opportunity to correlate information shared by organizations in other sectors.
  • Many small organizations may not be able to afford premium intelligence services, nor to join any of these closed sub-communities, for several reasons.
  • Part of the community believes that we should share publicly because the bad guys just don’t care, as shown by the fact that they often reuse the same infrastructure and modus operandi.
  • By sharing only within closed groups, those most affected would be DFIR people, who use such public information as a source of intel to determine whether they have been compromised or not.


What is your view on this?

Pasquale

[1] – “When Threat Intel met DFIR”, http://ift.tt/2am8GtN

Saturday, July 30, 2016

Feedly:SANS Internet Storm Center, InfoCON: green. rtfobj, (Sat, Jul 30th)



from SANS Internet Storm Center, InfoCON: green

Yesterday I mentioned rtfobj.

Philippe told me that version 0.48 will parse the sample I analyzed yesterday. 0.48 is not a stable version (0.47 is), but you can download it from Github.
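If you want to try the development version yourself, something along these lines should work; the GitHub archive URL is an assumption (check the oletools repository for the exact location), and the RTF file name is just a placeholder:

$ pip install https://github.com/decalage2/oletools/archive/master.zip   # development build of oletools (0.48)
$ rtfobj sample.rtf                                                       # list/extract embedded OLE objects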

Didier Stevens
Microsoft MVP Consumer Security
blog.DidierStevens.com DidierStevensLabs.com

Friday, July 29, 2016

Feedly:Understanding Java Code and Malware | Malwarebytes Unpacked. PUP Friday: Cleaning up with 5 star awards



from Understanding Java Code and Malware | Malwarebytes Unpacked

Systweak’s RegClean Pro is quite a popular piece of software. Top Ten Reviews, a consumer review portal based in Utah, has ranked it number one in their “Registry Repair Software” category. It also boasts of having won more than a hundred 5-star awards. Yet in spite of this, something is amiss. Along with the praise also come criticisms. And we’ve seen a lot of them.

What is RegClean Pro?

It is a piece of software that markets itself as a registry cleaner and optimizer in order to improve the performance of the PC. It does this by removing redundant keys and/or entries from the Windows registry.

RegClean Pro arrives on user systems either as a downloaded file from www[DOT]systweak[DOT]com/registry-cleaner/, or as a program bundled with other free third-party software. The sample we’re using for this post has an MD5 hash value of 5b8e73834ad13039e7f9bc0338b4a946.

Although Systweak caters to various operating systems, RegClean Pro in particular can only be downloaded and used by Windows users.

[image: regclean-pro-file]

What happens when you install RegClean Pro?

Upon execution, RegClean Pro attempts to fingerprint the machine it is being installed on by looking up the user’s Windows account name and the computer name. It does this behind the scenes while showing the usual software GUI that users are expected to see. Below is a slideshow of these interfaces in succession:

[slideshow: installation screens]

It then opens the default browser to display the following “Thank you” message:

[image: regclean-pro-ty]

It finally creates the following scheduled tasks, which enable it to execute again at certain times of the day:

[image: regclean-pro-tasksched]

Below is RegClean Pro’s shortcut after it finished installing:

[image: regclean-pro-shortcut]

To show how RegClean Pro performs, below is a slideshow of its interfaces (also in succession) after it executed and opened the “Thank you” page shown above:

[slideshow: RegClean Pro scan screens]

Notable files and/or folders added:

  • C:\Program Files (x86)\RegClean Pro\Cloud_Backup_Setup.exe
    • detected as PUP.Optional.MyPCBackup
  • C:\Program Files (x86)\RegClean Pro\Cloud_Backup_Setup_Intl.exe
    • detected as PUP.Optional.MyPCBackup
  • C:\Program Files (x86)\RegClean Pro\unins000.exe
    • detected as PUP.Optional.SysTweak

Anything off with RegClean Pro’s End-User License Agreement?

For software that claims to clean the registry in order to improve PC performance, we find it quite odd to see the below bit in its EULA (emphasis ours):

NO PERFORMANCE WARRANTY. SYSTWEAK specifically disclaims any warranty for the amount
of performance increase or utility provided by the SOFTWARE PRODUCT. By purchasing
this software and accepting this EULA you specifically agree that you understand
that no representation or warranty is made by SYSTWEAK that the SOFTWARE PRODUCT
will necessarily increase performance or provide a utility benefit on your computer,
and that no claim of specific deficiency, defect, or underperformance has been made
with respect to your computer. Any claims of performance increases or utility made
for the software are those of possible or potential improvement or utility, and no
warranty is offered that a specific utility or amount of performance increase, if
any, will be realized on any particular computer. Each computer is different and
the scenarios under which they are used are different, and no claim is made that
any one computer or usage scenario shall see a performance increase or utility
benefit from the SOFTWARE PRODUCT. Your sole remedy for any dissatisfaction with
the presence of or the degree or amount of performance improvement or utility shall
be limited to the customer remedies described above.

Here’s another bit that we want to highlight in case you have used RegClean Pro and wish to hold Systweak responsible for the uncorrectable changes the software made to your system (emphasis ours):

BACKUP RESPONSIBILITY. The SOFTWARE PRODUCT is a system utility, and as such can
make irreversible changes to the state of computer on which it is run and that
SYSTWEAK cannot accurately predict or ensure the outcome in all possible scenarios,
and therefore purchaser agrees to make and test a complete system backup and backup
of all personal information before operating the SOFTWARE PRODUCT. You agree that
you accept all responsibility for reversing or correcting any changes made by the
SOFTWARE PRODUCT.

Does Malwarebytes Anti-Malware (MBAM) detect RegClean Pro?

We detect the RegClean Pro installer as PUP.Optional.RegCleanerPro. Its other component files are detected as PUP.Optional.RegCleanPro. You may refer to our forum page if you’re interested in knowing what these component files are, along with other technical details.

Conclusion

Systweak, the India-based developer of RegClean Pro, boasts of being a Microsoft Gold Partner. Some dodgy companies claim this, too, but in Systweak’s case, they indeed are an MS Gold Partner. For some users, a partnership with a tech giant is enough to convince them to try out third-party software. Consumers expect quality products and services because of this. In the end, however, many are let down, realizing that what they get is a PUP.

We have reported this company to Microsoft so they can open an investigation and hopefully consider revoking Systweak’s Gold partnership status.

As for registry cleaners, we generally consider them digital snake oil, so I wouldn’t touch them with a barge pole if I were you.

More PUP Friday posts:

Jovi Umawing (Thanks to Pieter for the assist)

 

Feedly:Understanding Java Code and Malware | Malwarebytes Unpacked. Unpacking yet another .NET crypter



from Understanding Java Code and Malware | Malwarebytes Unpacked

In this post, we will study one of the malicious executables recently delivered by RIG Exploit Kit. It is packed with a .NET crypter and includes features similar to one we described some time ago (here). Similar packers are widespread and commonly used to protect various malware samples, which is why it is worth knowing their common building blocks and the methods of defeating them.

Analyzed sample

The interesting fact about this sample is that it comes signed:

[image: signed_file]

Unpacking

The executable is written in .NET, so we can decompile it using one of the popular tools made for this purpose (.NET Reflector, dnSpy, etc.).

[image: obfuscation]

As we can see, the code is obfuscated – functions have garbled, meaningless names. The code also contains a lot of junk instructions and is difficult to follow. Even applying a well-known .NET deobfuscation tool (de4dot) didn’t help much.

[image: de4dot]

Anyway, let’s start by finding the possible payload that is going to be unpacked.

[image: resources]

Looking at the resources we can see one element that looks like a distorted PE file:

[image: distorted_pe]

It is loaded and processed by the following function:

[image: loading]

Using dnSpy we can set a breakpoint at the end of this function, run it, and dump the output buffer.

[image: dump_result]

The dumped binary turned out to be another PE file written in .NET (3a5cc47413cd815b44a0329100e552da). However, it is not the malicious payload that we are looking for, but just another element of the crypter – a loader. It unpacks the real payload and injects it into another binary using the RunPE technique (also known as process hollowing).

[image: loader_view]

The loader is not independent – it relies on resources from the previous file. We can see from the code that the resource “varitoyp” contains a set of parameters. It is decrypted by a function called DeCrypt, using the word “params” as the decryption key:

[image: load_params]

The real payload is hidden inside another encrypted resource. The name of that resource, as well as the decryption key, is included in the parameters decrypted in the previous step:

[image: run_payload]

The payload can be injected into one of the predefined executables: vbc.exe, RegAsm.exe, AppLaunch.exe, notepad.exe – or, alternatively, into its own process. The choice is made based on one of the parameters from the encrypted set.

The decryption algorithm is a custom, XOR-based one:

[image: DeCrypt]

Using a copy of this function we can easily decrypt the dumped resources from the initial binary. We were able to reconstruct a sample decoder; you can find the Python script here: msil_dec.py.

Decrypting parameters:

./msil_dec.py --file varitoyp --key params
0|0|0|0|0|0|0|0|0|0|10000|0|0|0|0|0|0|0|0|0|0|0|LgunkLBEWL7f5asOISuri|0|0|0|0|0|0|nTEVmryG9b8grLtmS06bryl0|ZxjmzvjUYrFNhuAOygWpbtsxcVZ|6|0|

The parameters take the form of a string containing values separated by a delimiter. Parameters 30 and 31 contain the name of the resource hiding the encrypted payload and the key, respectively.

nTEVmryG9b8grLtmS06bryl0, //binary
ZxjmzvjUYrFNhuAOygWpbtsxcVZ, //key
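As a quick sanity check on that layout, the two fields can be pulled straight out of the decoder's output with cut (field numbering is 1-based, matching the parameter numbers above):

./msil_dec.py --file varitoyp --key params | cut -d'|' -f30,31
nTEVmryG9b8grLtmS06bryl0|ZxjmzvjUYrFNhuAOygWpbtsxcVZ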

The encrypted executable is stored in the resources of the initial binary:
[image: binary]

So is the key:
[image: key]

Decrypting:

./msil_dec.py --file nTEVmryG9b8grLtmS06bryl0 --key ZxjmzvjUYrFNhuAOygWpbtsxcVZ > payload.exe

As a result we get the final payload (07a08cf5211665dfcd090e7bab6c8608) – it is a Neurevt Bot, used for DDoS attacks (read more here).

[image: commands]

Conclusion

This crypter probably shares some code with the previously described one – it might even be the work of the same authors. Again, we see a loader with another PE file packed inside. Again, there is an array of parameters. Finally, the list of applications into which the payload can be injected is exactly the same in both cases. In the previous crypter, a BMP file was used to hide the encrypted data (the configuration and the final payload). This time the authors gave up on applying any steganographic tricks.

Almost a year after the previous release, we cannot say that the product has evolved into something more complex. Instead, we see the same ideas, mutated and implemented differently.


This was a guest post written by Hasherezade, an independent researcher and programmer with a strong interest in InfoSec. She loves going into the details of malware and sharing threat information with the community. Check her out on Twitter @hasherezade and her personal blog: http://ift.tt/1R6Y8zL.

Feedly:Threats RSS Feed - Symantec Corp.. Infostealer.Rultazo



from Threats RSS Feed - Symantec Corp.

Risk Level: Very Low. Type: Trojan.

Feedly:Darknet – The Darkside. fping 3 – Multi Target ICMP Ping Tool



from Darknet – The Darkside


-a Show systems that are alive.

-A Display targets by address rather than DNS name.

-b n Number of bytes of ping data to send. The minimum size (normally 12) allows room for the data that fping needs to do its work (sequence number, timestamp). The reported received data size includes the IP header (normally 20 bytes) and ICMP header (8 bytes), so the minimum total size is 40 bytes. Default is 56, as in ping. Maximum is the theoretical maximum IP datagram size (64K), though most systems limit this to a smaller, system-dependent number.

-B n In the default mode, fping sends several requests to a target before giving up, waiting longer for a reply on each successive request. This parameter is the value by which the wait time is multiplied on each successive request; it must be entered as a floating-point number (x.y). The default is 1.5.

-c n Number of request packets to send to each target. In this mode, a line is displayed for each received response (this can be suppressed with -q or -Q). Also, statistics about responses for each target are displayed when all requests have been sent (or when interrupted).

-C n Similar to -c, but the per-target statistics are displayed in a format designed for automated response-time statistics gathering; the output shows the response time in milliseconds for each request, with a "−" indicating that no response was received.

-d Use DNS to look up the address of the returned ping packet. This allows you to give fping a list of IP addresses as input and print hostnames in the output.

-D Add Unix timestamps in front of output lines generated in looping or counting modes (-l, -c, or -C).

-e Show elapsed (round-trip) time of packets.

-f Read list of targets from a file. This option can only be used by the root user.

-g Generate a target list from a supplied IP netmask, or a starting and ending IP. Specify the netmask or start/end in the targets portion of the command line.

-h Print usage message.

-i n The minimum amount of time (in milliseconds) between sending a ping packet to any target (default is 25).

-l Loop sending packets to each target indefinitely. Can be interrupted with Ctrl-C; statistics about responses for each target are then displayed.

-m Send pings to each of a target host's multiple interfaces.

-n Same as -d.

-p n In looping or counting modes (-l, -c, or -C), this parameter sets the time in milliseconds that fping waits between successive packets to an individual target. Default is 1000.

-q Quiet. Don't show per-probe results, but only the final summary. Also don't show ICMP error messages.

-Q n Like -q, but show summary results every n seconds.

-r n Retry limit (default 3). This is the number of times an attempt at pinging a target will be made, not including the first try.

-s Print cumulative statistics upon exit.

-S addr Set source address.

-I if Set the interface (requires SO_BINDTODEVICE support).

-t n Initial target timeout in milliseconds (default 500). In the default mode, this is the amount of time that fping waits for a response to its first request. Successive timeouts are multiplied by the backoff factor.

-T n Ignored (for compatibility with fping 2.4).

-u Show targets that are unreachable.

-O n Set the type of service flag (TOS). n can be either decimal or hexadecimal (0xh) format.

-v Print fping version information.

-H n Set the IP TTL field (time to live hops).
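A couple of typical invocations for illustration; the subnet and host names below are placeholders:

$ fping -a -q -g 192.168.1.0/24                        # sweep a subnet, print only the hosts that are alive
$ fping -C 5 -e host1.example.com host2.example.com    # 5 probes per target, report per-probe round-trip times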

Feedly:We Live Security. Get rid of these undesirable ‘friends’ on Facebook



from We Live Security

In celebration of the International Day of Friendship, we want to help you spot undesirable 'friends' on Facebook.

The post Get rid of these undesirable ‘friends’ on Facebook appeared first on We Live Security.

Feedly:SANS Internet Storm Center, InfoCON: green. Malicious RTF Files, (Fri, Jul 29th)



from SANS Internet Storm Center, InfoCON: green

About a year ago I received RTF samples that I could not analyze with RTFScan or rtfobj (FYI: Philippe Lagadec has improved rtfobj.py significantly since then). So I started to write my own RTF analysis tool (rtfdump), but I was not satisfied enough with the way I presented the analysis result to warrant a release of my tool. Last week, I started analyzing new samples and updating my tool. I released it, and show how I analyze sample 07884483f95ae891845caf0d50ce507f in this diary entry.


This sample is a heavily obfuscated RTF file. RTF files are essentially sets of nested strings that start with { and end with }. Like this (strongly simplified):

{\rtf {data {more data}}}.

Malicious RTF files contain a payload. Objects in RTF files are embedded in hexadecimal, like this (strongly simplified):
{\rtf {data
{\*\objdata
01050000
02000000
08000000
46696C656E616D6500000000000000...
}}}

Malicious RTF files obfuscate the hexadecimal data in many ways, one of them is to put extra control strings inside the hexadecimal data, like this:
{\rtf {data
{\*\objdata
01050000
02000000
08000000
46696C656E61{\obj}6D6500000000000000...
}}}

The sample I analyzed takes this to the extreme. After each hexadecimal digit, extra control strings and whitespace are inserted:


(I removed a lot of whitespace to be able to put several hexadecimal digits on the screen).
The hexadecimal digits (highlighted in red) are 01050…

My tool outputs a line of analysis data for each nested string. In this sample, because of the obfuscation, there are a lot of them (22956, which is gigantic for an RTF file).

But you can reduce the output by filtering for entries that (potentially) contain an embedded object using option -f O:

Entry 165 is the one we will take a closer look at first. The information presented for entry 165 is the following: the nesting level is 4, it has 1 child (c=), starts at position 2ae5 in the file (p=), is 1194952 bytes long (l=), has 11429 hexadecimal digits (h=), has no \bin entries (b=), contains an embedded object (O), has 1 unknown character (u=) and is named \*\objdata133765.

We can select entry 165 for closer analysis:

I highlighted the hexadecimal digits in red.

To decode the hexadecimal data, we use option -H:

You can see the hex data clearly now: 01 05 00 ...

Since this is an embedded object, we use option -i to get more info on the object:

From the magic header, we see that the embedded object is an OLE file (FYI: if we analyze it with oledump, we get parsing errors).

Looking further into the stream (-H), we see stream entries in the output:

And a bit further, we even find a URL:

Taking a closer look, I see not only a URL, but also hex data that looks like shellcode.

We can select this shellcode by cutting it out of the stream (option -c):

And of course we can also dump it to a file (option -d), so that we can analyze it with the shellcode analyzer from libemu:
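Since the screenshots do not reproduce here, a rough sketch of the whole command sequence looks like this. The entry number and option letters come from the analysis above; -s for selecting an entry is an assumption based on the conventions of the other *dump tools, and the exact -c cut expression is omitted because it depends on the offsets in your sample:

$ rtfdump.py -f O sample.rtf                                               # list entries that (potentially) contain an object
$ rtfdump.py -s 165 -H -i sample.rtf                                       # decode the hex data, show info on the embedded object
$ rtfdump.py -s 165 -H -c <cut-expression> -d sample.rtf > shellcode.bin   # cut out and dump the shellcode
$ sctest -S -s 100000 < shellcode.bin                                      # run it through libemu's shellcode emulator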

So this RTF file is a downloader.

The presence of shellcode in an RTF file is often an indication of an exploit. rtfdump supports YARA (like many of my *dump tools):

The first YARA search doesn't find anything. But the second search with option -H (to decode the hexadecimal content to binary) has hits for my RTF_ListView2_CLSID YARA rule. This indicates that entry 165 contains a byte sequence for the ListView2 classid, so this is very likely an exploit for vulnerability CVE-2012-0158 in this ListView.
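For reference, the two searches would look roughly like this; the -y option for supplying YARA rules is an assumption carried over from oledump, and the rule file name is a placeholder:

$ rtfdump.py -y rtf_rules.yara sample.rtf        # scan the raw, still hex-encoded content: no hits
$ rtfdump.py -y rtf_rules.yara -H sample.rtf     # decode the hex first: RTF_ListView2_CLSID matches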

The set of samples I looked at last week are characterized by the following properties:

they start with {\rtfMETAX

they end with this:

If you have interesting tools or techniques to analyze RTF files, please post a comment.

Didier Stevens
Microsoft MVP Consumer Security
blog.DidierStevens.com DidierStevensLabs.com

Feedly:Security News - Software vulnerabilities, data leaks, malware, viruses. WikiLeaks, NSA leaker Edward Snowden clash on Twitter



from Security News - Software vulnerabilities, data leaks, malware, viruses

National Security Agency leaker Edward Snowden and transparency website WikiLeaks are clashing over how best to handle the publication of sensitive data, a spat played out over Twitter.

Feedly:We Live Security. The 10 Security Commandments for every SysAdmin



from We Live Security

Celebrating the 17th annual SysAdmin Day, we recognize their dedication and workplace contributions and want to show appreciation for their talent.

The post The 10 Security Commandments for every SysAdmin appeared first on We Live Security.

Feedly:We Live Security. Security professionals ‘extremely concerned’ by cybercrime threat



from We Live Security

Security professionals are more concerned than ever by the threat of cybercrime, according to new poll.

The post Security professionals ‘extremely concerned’ by cybercrime threat appeared first on We Live Security.

Feedly:Understanding Java Code and Malware | Malwarebytes Unpacked. The IPExpo / Infosec Europe / Blogger Awards roundup



from Understanding Java Code and Malware | Malwarebytes Unpacked

For the last few months, Malwarebytes has been representing at some of the biggest security events in England, along with a couple of additional happenings thrown in for good measure.

In May, we paid a visit to Manchester for IP Expo. Held in a converted railway station, security talks galore sat alongside the usual assortment of booths and giveaways.

I took part in a Keynote debate which was pretty much an open-mic session with the audience – no set topic or gameplan, just “Ask us things and see what happens”. As it turns out, we had a great time with occasional sparks flying. Fellow panelists included Paul Ducklin from Sophos and Lee Barney from M&S.

A fun time was had by all, even if I did consider just moving out of the way so Paul and Lee could crank up the no holds barred approach. Topics included liability for data breaches, regulatory issues, home security and potential drawbacks to various Government practices involving data and biometrics. You can watch the full thing below:

A week later, the Malwarebytes convoy rolled into the Chelsea FC stadium and combined a tour of the ground (which I missed – whoops) with a number of talks about popular threats doing the rounds at the moment. For my part, I highlighted some of the dangers related to Malvertising and the increasing headache of CFO fraud. Also, the football ground is a nightmare to reach and I’m here if you need to talk to me about “Why is this train going the wrong way” woes.

 

A brief pause until June, and the juggernaut that is Infosec Europe. The aforementioned event is one of the biggest security trade shows of the year, taking place alongside the ever awesome BSides London. The Malwarebytes team spent 3 days on the booth, dispensing security related information, daily talks, and a whole pile of t-shirts. Last but not least, we also had an awesome robot to hand for all your photography related needs:

I gave a talk on all three days on some of the biggest threats currently causing headaches for businesses and people at home (and indeed, people who run a business FROM home. You don’t get off the hook from scammers that easily!)

One of the days was streamed on Periscope and recorded, and if you’re interested please feel free to take a look. All the usual caveats – incredibly noisy space, people talking, recorded on someone’s phone, I’m yelling over the sound of someone smashing up servers with a sledgehammer – no, really – apply. A Kurosawa movie, this isn’t:

http://ift.tt/2aBfoBQ

The topics included Malvertising, Ransomware (on desktop and mobile), and CFO fraud / CEO fraud / BEC scams / Business Email Compromise / whatever random name it’s being called this week. I tend to stick with CFO fraud because out of all the alternatives, it’s the only one I can consistently remember without grasping at thin air for word cues.

Also, this happened:

Sadly, The Queen doesn’t generally walk around security conferences shaking hands with people but as fake Queens go, that’s a pretty good one.

On day two, I spoke to Dan Raywood about the perils of Ransomware and some of the new directions this persistent piece of Malware may end up moving in. Bonus points to the still image Youtube is using for the clip, as it makes me look like a thoroughly entertaining chap doing an “And another thing…” routine.

Once a second day of working the booth was over, some of the Blog Team made our way to the 2016 Infosec Blogger Awards which took place at – you’ve guessed it – Infosec Europe 2016. There was some tough competition present across all categories, from Sophos and Bitdefender to Troy Hunt and Heimdal Security. We’ve been fortunate enough to win some awards previously [1], [2], so the pressure was indeed on.

The Awards themselves celebrate the hard work done by researchers, bloggers, podcasters and videomakers over the last 12 months – and there were certainly a lot of candidates to choose from. Would Graham Cluley nab the best video award? Could a newcomer win the best personal blog? How about Brian Honan adding to his Hall of Fame induction with a best EU Tweeter award?

Well, wonder no more. I’m pleased to announce that as a result of your votes, Malwarebytes Labs won Best Corporate Blog 2016!

2016 best corporate blog: Malwarebytes

Photo © Infosecurity Europe 2016

Here is the full list of results:

Best Corporate blog – Malwarebytes Labs Blog

The Best European Corporate Security Blog – Sophos Naked Security

Best European Security Podcast – Securing Business

Best Security Podcast – Risky Business

Best Security Video Blog – Graham Cluley

Best Personal Security Blog – Jack Daniel’s Uncommon Sense Security

Best European Personal Security Blog – Security Affairs

Most Entertaining Blog – Troy Hunt

Most Educational Blog – Heimdal Security

Best New Security Blog – Info Sec Guy Blog

Best EU Security Tweeter – Mikko Hypponen

Grand Prix Prize for Overall Best European Security Blog – Bitdefender Hot for Security

We do our very best to provide you with the latest breaking news regarding exploits, scams, malvertising, and more besides – and we’ll continue to do so over the next 12 months.

All in all, it’s been a busy few months for the team and I’m happy to report everything went according to plan. The next time you’ll likely hear me talking about things on a stage will be VB 2016 in Denver, Colorado alongside my good colleague Jerome Segura on the subject of Malvertising.

For now, I’ll leave you with a radio ramble about the attraction of Pokemon GO for scammers, because we can all do with a bit of creature-collecting security advice in our lives. Thanks again for your votes, and safe surfing!

Christopher Boyd

Thursday, July 28, 2016

Feedly:SANS Internet Storm Center, InfoCON: green. ISC Stormcast For Friday, July 29th 2016 http://ift.tt/2aA59hq, (Fri, Jul 29th)



from SANS Internet Storm Center, InfoCON: green

...

Feedly:Fortinet Blog | News and Threat Research - All Posts. RIoT Control – What Are the “Things” in the IoT?



from Fortinet Blog | News and Threat Research - All Posts

This is the second in a series of blogs written as a companion to my forthcoming book, RIoT Control – Understanding and Managing Risk and the Internet of Things What Are the “Things” in The Internet of Things? User-based devices that communicate, consume content, and create and publish content for other people to consume have dominated our current version of the Internet. The developing Internet of Things is about to change that. While it will include the “old” Internet of user-based devices, it is very different...

Feedly:We Live Security. ISF publishes major update to its information security guide



from We Live Security

The Information Security Forum (ISF) has published a major update to its Standard of Good Practice for Information Security.

The post ISF publishes major update to its information security guide appeared first on We Live Security.

Feedly:Understanding Java Code and Malware | Malwarebytes Unpacked. Five ways to stay safe online while playing Pokémon Go



from Understanding Java Code and Malware | Malwarebytes Unpacked

Ah, Pokémon Go. Most of us have heard about it, played it, and (probably for some) been concerned by it.

Since its release in early July, the game has made headlines for weeks: from how it caused a resounding buzz in North America, Australia, Europe, and Japan to how it has been blamed for accidents, local crimes, and (in a rare case) even death.

No one expected to see a mobile gaming app become so popular so fast and affect people the way it has. Indeed, the introduction of Pokémon Go—plus the sharp rise in popularity of augmented reality—has opened a lot of opportunities for cross-industry innovation and growth. Unfortunately, it’s not all fun and games for every player and those caught up in the experience of others.

Below are some surefire ways to play Pokémon Go safely while avoiding potential threats online:

  • Make sure that the Google or Apple account you’re using to log in to Pokémon Go uses a strong password, and make sure two-factor authentication is enabled for it. We say this so often that it has become a general, basic tip for anyone with an online account; it applies not just to Pokémon Go but to all of your accounts.
  • Avoid downloading and installing unofficial versions of the app and/or other apps that claim to be some kind of “helper”. First of all, several unofficial versions of Pokémon Go on the market have been found to be malicious. According to research by security company ESET, one of the apps they spotted is capable of locking a smartphone’s screen and running in the background to click ads on adult sites. Other apps that claim to correspond with Pokémon Go, such as those that promise to increase Pokécoins, function more like scareware.
  • Avoid visiting sites that promise free goodies for Pokémon Go. Days after the game drove players out of their houses for long walks, our researchers started picking up online scams using it as bait. If you’re an avid reader of this blog, you’ll know that scammers normally bank on the current hot trends. Game scams usually promise hacks, cheats, and other freebies. Don’t bother with them, as they’re normally survey scams.
  • Never share your credentials with anyone. Yes, that includes your closest friends and family members. As we’ve seen in a previous study, kids in the EU are susceptible to this kind of behavior. However, teens and adults might be tempted to do the same, thinking that it’s harmless. What they probably don’t realize is that the credentials used to log in to a Pokémon Go account could be used to access their other accounts, like Facebook or Twitter.
  • Use prepaid or gift cards when buying in-app goodies instead of your debit or credit card. Sometimes you can’t avoid purchasing digital goods that you may need to catch them all. Gift cards are a safe alternative if you’re uncomfortable using your bank card for such transactions. Thankfully, both Google and Apple offer these.

A word about selling Pokémon Go accounts

Several reports of Go players selling their accounts on eBay, Craigslist, and even Facebook, after they’ve significantly leveled up or simply decided they can’t keep playing because of other priorities, have begun appearing in the news recently.

Although this may sound like a logical step forward for gamers after having their fun, American software company Niantic Labs, creators of Pokémon Go, has made it clear in their Terms of Service that they forbid users to “sell, resell, rent, or lease the App or your Account.” Below is the copied version of that section for your reference:

Conduct, General Prohibitions, and Niantic’s Enforcement Rights

You agree that you are responsible for your own conduct and User Content while using
the Services, and for any consequences thereof. Please refer to our Trainer
Guidelines (http://ift.tt/2azQQsN) for information
about the kinds of conduct and User Content that are prohibited while using the
Services. By way of example, and not as a limitation, you agree that when using the
Services and Content, you will not:

* use the Services or Content, or any portion thereof, for any commercial purpose or
for the benefit of any third party or in a manner not permitted by these Terms,
including but not limited to (a) gathering in App items or resources for sale
outside the App, (b) performing services in the App in exchange for payment outside
the App, or (c ) sell, resell, rent, or lease the App or your Account;

Although we’re not obligated to monitor access to or use of the Services or Content
or to review or edit any Content, we have the right to do so for the purpose of
operating the Services, to ensure compliance with these Terms, and to comply with
applicable law or other legal requirements. We reserve the right, but are not
obligated, to remove or disable access to any Content, at any time and without
notice, including but not limited to, if we, at our sole discretion, consider
any Content to be objectionable or in violation of these Terms. We have the
right to investigate violations of these Terms or conduct that affects the
Services. We may also consult and cooperate with law enforcement authorities to
prosecute users who violate the law.

Happy gaming and stay safe out there!

Other related post(s):

Jovi Umawing

Feedly:Security News - Software vulnerabilities, data leaks, malware, viruses. Researchers raise more than two million dollars to rethink cybersecurity



from Security News - Software vulnerabilities, data leaks, malware, viruses

Is antivirus software already dead? That's certainly what George Candea believes, and he's not the only computer security expert who says so. "Large enterprises and government agencies often deploy antivirus software to satisfy legal obligations or to meet contractual requirements, not because they really believe that the software can defend them," says George Candea. Together with some of his former PhD students, the EPFL professor founded Cyberhaven, a startup that is developing a brand new approach to computer security. And their results are promising. In a third-party test, their solution warded off all 144 cyber attacks that had been hand-crafted by professional penetration testers, whereas so-called heuristic modern security products caught just over 20 of them. As for the best classical antivirus software tested, it only caught one. "I think it just got lucky!" muses the researcher.

Feedly:Security Intelligence | TrendLab.... Law Enforcement and the Deep Web: Willing, but Underfunded



from Security Intelligence | TrendLab...

As everyone knows by now, there have been some recent attacks in Germany that have people worried about their security. One question that comes up is this: how did the attackers obtain their weapons?

In the recent Munich shooting, the attacker obtained his gun (a Glock 17 pistol) from an underground market. I was interviewed by the Handelsblatt newspaper about underground markets.

It shouldn’t be a surprise that the attacker was able to buy a weapon online. Deep Web sites are not particularly difficult to find or access if the user is sufficiently determined to do so. Of course, some of the advertisements are fraudulent–but with enough patience, someone in these markets (whether it be a criminal or a terrorist) can obtain the illegal goods he wants.

The importance of going after these sites is obvious. So one may ask: why don’t the police do it? It’s not for lack of trying, but the reality is that the police don’t have the resources to do so. People imagine that the police have unlimited resources to catch criminals–that may be true in some cases, but not for cybercrime.

Many police departments don’t have the in-house expertise or resources to police the Deep Web as well as it needs to be policed. This is despite the fact that more and more illegal activity takes place online, making this capability increasingly important in solving crimes. The police want to police the Deep Web, but around the globe cybercrime units do not have the resources they need.

Some politicians would like to ban the Deep Web entirely, or keep it under tight surveillance. Saying so is short-term populism, and not realistic. The Deep Web is not inherently evil. It isn’t. Anonymity has its place online. Certainly, for dissidents living under dictatorial regimes, it’s very useful. There’s nothing inherently good or bad about the Deep Web – it’s a tool, like any other.

Law enforcement is aware of the importance of the Deep Web and they are working to plug this gap in their capabilities. However, the private sector is still ahead of the game. Both researchers and the police will benefit by working together in this area.

Trend Micro works with law enforcement agencies from all over the world to help monitor the Deep Web. This includes several State Offices of Criminal Investigation and the Federal Office of Criminal Investigation, some of which are investigating these recent incidents. We are committed to continuing to help law enforcement agencies everywhere build up their capacity to investigate online activities just as easily as they can offline ones. We have published multiple papers discussing the results of our research efforts into these unseen corners of the Internet.

Post from: Trendlabs Security Intelligence Blog - by Trend Micro

Law Enforcement and the Deep Web: Willing, but Underfunded

Feedly:We Live Security. 5 highlights from the ‘information security Olympic Games’



from We Live Security

In the spirit of this year’s Olympics, which is being held in Rio de Janeiro, we thought we’d host our own little information security Olympic Games.

The post 5 highlights from the ‘information security Olympic Games’ appeared first on We Live Security.

Wednesday, July 27, 2016

Feedly:SANS Internet Storm Center, InfoCON: green. Verifying SSL/TLS certificates manually, (Thu, Jul 28th)



from SANS Internet Storm Center, InfoCON: green

I think that we can surely say that, with all its deficiencies, SSL/TLS is still a protocol we cannot live without, and the basis of today’s secure communication on the Internet. Quite often I get asked how certificates are really verified by browsers or other client utilities. Sure, the canned answer that “certificates get signed by CAs and a browser verifies whether the signatures are correct” is always there, but more persistent questions about how exactly it works come up here and there as well.

So, if you ever wondered how a certificate can be fully verified manually by checking all the steps, this is the diary for you! In this example we will manually verify the certificate of the site you are reading this diary on, https://isc.sans.edu. We will use the openssl utility so you can replicate all the steps for any certificate on any machine where you have openssl. Here we go.

In order to get the certificate we want to verify we can simply connect to https://isc.sans.edu with the openssl utility. For that, the s_client command will be handy and it will print out the certificate in PEM format on the screen so we just have to catch it and put it into a file:

$ openssl s_client -connect isc.sans.edu:443 < /dev/null | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > isc.sans.edu.pem

The isc.sans.edu.pem file now contains the certificate from isc.sans.edu. We could try to verify it with openssl directly as shown below:

$ openssl verify -verbose isc.sans.edu.pem
isc.sans.edu.pem: C = US, postalCode = 20814, ST = Maryland, L = Bethesda, street = Suite 205, street = 8120 Woodmont Ave, O = The SANS Institute, OU = Network Operations Center (NOC), OU = Unified Communications, CN = isc.sans.edu
error 20 at 0 depth lookup:unable to get local issuer certificate

Hmm, no luck. But that is because the CA file that comes with Linux by default is missing some of the intermediates. Those either have to be in the CA store, or the server has to deliver the whole chain to us when we initially connect. Ok, not a problem – let’s continue manually.
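As an aside, you can check what chain the server actually delivers with s_client as well; -showcerts prints every certificate sent during the handshake, so if the intermediate shows up there you can skip hunting for it manually:

$ openssl s_client -showcerts -connect isc.sans.edu:443 < /dev/null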
First we can see who the issuer really is here, and what the certificate’s parameters are:

$ openssl x509 -in isc.sans.edu.pem -noout -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            24:21:68:a7:55:13:74:1a:d1:95:fb:62:26:90:c9:1d
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=GB, ST=Greater Manchester, L=Salford, O=COMODO CA Limited, CN=COMODO RSA Organization Validation Secure Server CA
        Validity
            Not Before: Apr  7 00:00:00 2015 GMT
            Not After : Apr  6 23:59:59 2018 GMT
        Subject: C=US/postalCode=20814, ST=Maryland, L=Bethesda/street=Suite 205/street=8120 Woodmont Ave, O=The SANS Institute, OU=Network Operations Center (NOC), OU=Unified Communications, CN=isc.sans.edu

Ok, so the certificate is valid, and it is signed by Comodo, as you can see in the highlighted line. The part that matters to the browsers is actually only the CN component. In the Subject field we can see that the CN matches our site (isc.sans.edu), and in the Issuer field we can see that the signing CA (which is an intermediate CA) is called COMODO RSA Organization Validation Secure Server CA.

We can verify this information in the RFC2253 format as well, for both the subject and issuer; this will be easier to read:

$ openssl x509 -in isc.sans.edu.pem -noout -subject -issuer -nameopt RFC2253
subject= CN=isc.sans.edu,OU=Unified Communications,OU=Network Operations Center (NOC),O=The SANS Institute,street=8120 Woodmont Ave,street=Suite 205,L=Bethesda,ST=Maryland,postalCode=20814,C=US
issuer= CN=COMODO RSA Organization Validation Secure Server CA,O=COMODO CA Limited,L=Salford,ST=Greater Manchester,C=GB

So, let’s first try getting the CAcert file that is used by Mozilla. This might help us verify everything. That being said, getting the CAcert file from Mozilla is not all that trivial, as some extraction and conversion needs to be done. Luckily, the good folks at curl already publish the cacert file in PEM format, so we can get it from their web site; it’s available at http://ift.tt/1SFao8F

$ curl http://ift.tt/1OeNMyn -o cacert.pem

The file even contains the names of the CAs in plain text. Let’s search for Comodo:

$ grep -i Comodo cacert.pem
Comodo AAA Services root
Comodo Secure Services root
Comodo Trusted Services root
COMODO Certification Authority
COMODO ECC Certification Authority
COMODO RSA Certification Authority

It doesn’t have the one that we need: remember that it must match the CN field precisely! This also confirms that it is an intermediate CA. We will probably have to find the intermediate CA’s certificate on Comodo’s web site. Let’s paste the name into Google (“COMODO RSA Organization Validation Secure Server CA“) and see what we get.

The first hit will lead us to http://ift.tt/2a1mrms and sure enough - this is where our intermediate CA is. Let’s download it:

$ curl 'http://ift.tt/2aLZHnG' > comodo.crt

Now let’s check the issuer and subject here as well:

$ openssl x509 -in comodo.crt -subject -issuer -noout -nameopt RFC2253
subject= CN=COMODO RSA Organization Validation Secure Server CA,O=COMODO CA Limited,L=Salford,ST=Greater Manchester,C=GB
issuer= CN=COMODO RSA Certification Authority,O=COMODO CA Limited,L=Salford,ST=Greater Manchester,C=GB

Great! That’s exactly what we need – see that the Subject field (the CN component) matches the signer of our certificate exactly. We are in luck with the issuer as well:

$ grep "COMODO RSA Certification Authority" cacert.pem
COMODO RSA Certification Authority

It is a root CA that exists in Mozilla’s cacert.pem – so we have the full chain!
Let’s get back to verifying our certificate from isc.sans.edu. First we need to check which signature algorithm has been used:

$ openssl x509 -in isc.sans.edu.pem -noout -text | grep Signature
    Signature Algorithm: sha256WithRSAEncryption

Ok, SHA256 with RSA (great job Johannes on renewing the cert properly :)). What does this mean? It means that the critical parts of the certificate have been hashed by the CA with the SHA256 hashing algorithm and then encrypted with the CA’s private key. Its public key is available in the comodo.crt file we just downloaded (and isc.sans.edu’s public key is in the certificate we got from the web site). Openssl can confirm that for us as well:

$ openssl x509 -in isc.sans.edu.pem -noout -text

        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (4096 bit)
                Modulus:
                    00:d4:8f:58:63:f4:30:0b:ad:05:d0:37:f1:69:97:
                    6e:27:90:a5:dd:43:d7:c5:30:0d:dc:73:80:6a:fc:

What we need to do now is the following:

  • We need to extract the signature from the certificate and then use Comodo’s public key to decrypt it; with this we will get the SHA256 hash of the certificate
  • Then we need to calculate our own SHA256 hash of the certificate
  • If those two match: the certificate is signed properly

In order to extract components of a certificate we need to decode its ASN.1 structure. Luckily, openssl can do that for us, so let’s see what we get for isc.sans.edu’s certificate:

$ openssl asn1parse -in isc.sans.edu.pem

1582:d=1  hl=2 l=  13 cons: SEQUENCE
 1584:d=2  hl=2 l=   9 prim: OBJECT            :sha256WithRSAEncryption
 1595:d=2  hl=2 l=   0 prim: NULL
 1597:d=1  hl=4 l= 257 prim: BIT STRING

So, the last object is actually the signature – it starts at offset 1597, so let’s extract it with openssl:

$ openssl asn1parse -in isc.sans.edu.pem -out isc.sans.edu.sig -noout -strparse 1597

Now we have the file isc.sans.edu.sig, which is the RSA-encrypted SHA256 hash (the signature). How do we decrypt it? We need Comodo’s public key, which is available in its certificate, so let’s extract it:

$ openssl x509 -in comodo.crt -pubkey -noout > comodo.pub

Now that we have Comodo’s public key, we can finally decrypt the SHA256 hash. This will only work if the original was encrypted with the corresponding private key. We’ll get an ASN.1 structure back, so let’s display it properly on the screen as well:

$ openssl rsautl -verify -pubin -inkey comodo.pub -in isc.sans.edu.sig -asn1parse
    0:d=0  hl=2 l=  49 cons: SEQUENCE
    2:d=1  hl=2 l=  13 cons:  SEQUENCE
    4:d=2  hl=2 l=   9 prim:   OBJECT            :sha256
   15:d=2  hl=2 l=   0 prim:   NULL
   17:d=1  hl=2 l=  32 prim:  OCTET STRING
      0000 - 4b ca b8 23 4d 52 da e1-31 f1 0d b0 ba 3d 33 6b   K..#MR..1....=3k
      0010 - 0e 3d 68 0f 99 cb 35 43-69 ff 70 d0 1d a6 ef c1   .=h...5Ci.p.....

Yay, it worked. So it has been encrypted properly. The highlighted part is actually the SHA256 hash.
The last step now is to extract the critical parts of the certificate and verify whether both hashes match. So what are the critical parts of the certificate? The X509 standard defines them as the so-called TBSCertificate (To Be Signed Certificate), which is the first object in the certificate:

$ openssl asn1parse -in isc.sans.edu.pem 
    0:d=0  hl=4 l=1854 cons: SEQUENCE
    4:d=1  hl=4 l=1574 cons: SEQUENCE
    8:d=2  hl=2 l=   3 cons: cont [ 0 ]
   10:d=3  hl=2 l=   1 prim: INTEGER           :02
   13:d=2  hl=2 l=  16 prim: INTEGER           :242168A75513741AD195FB622690C91D
   31:d=2  hl=2 l=  13 cons: SEQUENCE
   33:d=3  hl=2 l=   9 prim: OBJECT            :sha256WithRSAEncryption

Ok, the first object starts at offset 4, let’s extract it the same way as before:

$ openssl asn1parse -in isc.sans.edu.pem -out tbsCertificate -strparse 4

The file tbsCertificate contains the data we need to run the SHA256 hash over. We can again use openssl for that:

$ openssl dgst -sha256 -hex tbsCertificate
SHA256(tbsCertificate)= 4bcab8234d52dae131f10db0ba3d336b0e3d680f99cb354369ff70d01da6efc1

Remember the decrypted ASN.1 object? Scroll up – or let me paste it here one more time (this diary is already longer than I thought really):


   17:d=1  hl=2 l=  32 prim:  OCTET STRING
      0000 - 4b ca b8 23 4d 52 da e1-31 f1 0d b0 ba 3d 33 6b   K..#MR..1....=3k
      0010 - 0e 3d 68 0f 99 cb 35 43-69 ff 70 d0 1d a6 ef c1   .=h...5Ci.p.....

Yay! It’s a full, 100% match. So the certificate is correctly signed by Comodo’s intermediate CA. We could now repeat all the steps to verify whether the intermediate CA is correctly signed by the root CA that we got from Mozilla’s cacert.pem, but we can also have openssl do that for us; we just need to tell it which CA file to use:

$ cat comodo.crt >> cacert.pem
$ openssl verify -verbose -CAfile cacert.pem isc.sans.edu.pem
isc.sans.edu.pem: OK
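As a final sanity check, the same verification can be done against the live server in one go; when the chain validates against the supplied CA file, s_client prints a "Verify return code: 0 (ok)" line near the end of its output:

$ openssl s_client -connect isc.sans.edu:443 -CAfile cacert.pem < /dev/null | grep "Verify return code"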

And that gets us to the end of the verification.

-- Bojan
https://twitter.com/bojanz
INFIGO IS

Feedly:SANS Internet Storm Center, InfoCON: green. ISC Stormcast For Thursday, July 28th 2016 http://ift.tt/2ao9vqx, (Thu, Jul 28th)



from SANS Internet Storm Center, InfoCON: green

...

Feedly:Errata Security. NYTimes vs. DNCleaks



from Errata Security

People keep citing this New York Times article by David Sanger that attributes the DNCleaks to Russia. As I've written before, this is propaganda, not journalism. It's against basic journalistic ethics to quote anonymous "federal officials" in a story like this. The Society of Professional Journalists repudiates this [1] [2]. The NYTimes' own ombudsman has itself criticized David Sanger for this practice.

Quoting anonymous federal officials is great when they disagree with government, when revealing government malfeasance, when it's something that people will get fired over.

But the opposite is happening here. It's either Obama himself or some faction within the administration that wants us to believe Russia is involved. They want us to believe the propaganda, then hide behind anonymity so we can't question them. This evades obvious questions, like whether all their information comes from the same public sources that already point to Russia, or whether they have their own information from the CIA or NSA that points to Russia.

Everyone knows the Washington press works this way, and that David Sanger in particular is a journalistic whore. The Netflix series House of Cards portrays this accurately in its first season, only "Zoe Barnes" is "David Sanger". In exchange for exclusive access to stories, the politician gets to plant propaganda when it suits his purpose.

All this NYTimes article by Sanger tells us is that some faction within the administration wants us to believe this, not whether it's true. That's not surprising. There are lots of war hawks who would want us to believe this. There are also lots who support Hillary over Trump -- who want us to believe that electing Trump plays into Putin's hands. Of course David Sanger would write such a story quoting anonymous sources, like he does after every such incident. You can pretty much write the story yourself.

Thus, we should fully discount Sanger's story. If government officials are willing to come forward and be named, and be held accountable for the information, then we should place more faith in them. As long as faithless journalists protect them with anonymity, we shouldn't believe anything they say.

Feedly:Darknet – The Darkside. In 2016 Your Wireless Keyboard Security Still SUCKS – KeySniffer

Feedly:We Live Security. Cyberattacks affect ‘nearly every single company’



from We Live Security

Around eight in every 10 cybersecurity executives admit their company has been compromised by a cyberattack in the past 24 months.

The post Cyberattacks affect ‘nearly every single company’ appeared first on We Live Security.
