Skype finally getting end-to-end encryption | Ars Technica

Since its inception, Skype has been notable for its secretive, proprietary algorithm. It’s also long had a complicated relationship with encryption: encryption is used by the Skype protocol, but the service has never been clear exactly how that encryption was implemented or exactly which privacy and security features it offers.

That changes today in a big way. The newest Skype preview now supports the Signal protocol: the end-to-end encrypted protocol already used by WhatsApp, Facebook Messenger, Google Allo, and, of course, Signal. Skype Private Conversations will support text, audio calls, and file transfers, with end-to-end encryption that Microsoft, Signal, and, it’s believed, law enforcement agencies cannot eavesdrop on.

Presently, Private Conversations are only available in the Insider builds of Skype. Naturally, the Universal Windows Platform version of the app (the preferred version on Windows 10) isn’t yet supported. In contrast, the desktop version of the app, along with the iOS, Android, Linux, and macOS clients, all have compatible Insider builds. Private Conversations aren’t the default and don’t appear to yet support video calling. The latter limitation shouldn’t be insurmountable (Signal’s own app offers secure video calling). We hope to see the former change once updated clients are stable and widely deployed.

We’ve criticized Skype’s failure to provide this kind of security in the past. Skype still has valuable features, such as its interoperability with traditional phone networks and additional tools for TV and radio broadcasters. But its tardiness at adopting this kind of technology left Skype behind its peers. The adoption of end-to-end security is very welcome, and the decision to do so using the Signal protocol, rather than yet another proprietary Skype protocol, marks a change from the product’s history.

Although Skype remains widely used, mobile-oriented upstarts like WhatsApp and Facebook Messenger have rapidly surpassed it. Becoming secure and trustworthy is a necessary development, but whether it will be sufficient to reinvigorate the application is far from clear.


FBI chief says phone encryption is a ‘major public safety issue’

Wray urged the private sector to work with the government in finding “a way forward quickly,” insisting that the FBI isn’t interested in peeking into ordinary citizens’ devices; the bureau just wants access to the ones owned by suspects. That pretty much echoes Comey’s position during his tenure: recall that the FBI asked tech titans to create a backdoor into their software and phones in order to give authorities a way to open them during investigations. Apple chief Tim Cook said the request had “chilling” and “dangerous” implications, warning that companies wouldn’t be able to control how that backdoor was used.

Wray told the audience at the event that authorities face an increasing number of cases that rely on electronic evidence. He doesn’t buy companies’ claims that it’s impossible to make encryption more law enforcement-friendly, so to speak. Not that the FBI can’t do anything if it absolutely has to: when Apple refused to cooperate with authorities to unlock the San Bernardino shooter’s iPhone, the agency paid a third party almost a million dollars to get the job done.


Ransomware – Wikipedia

Ransomware is a type of malicious software from cryptovirology that threatens to publish the victim’s data or perpetually block access to it unless a ransom is paid. While some simple ransomware may lock the system in a way which is not difficult for a knowledgeable person to reverse, more advanced malware uses a technique called cryptoviral extortion, in which it encrypts the victim’s files, making them inaccessible, and demands a ransom payment to decrypt them.[1][2][3][4] In a properly implemented cryptoviral extortion attack, recovering the files without the decryption key is an intractable problem, and difficult-to-trace digital currencies such as Ukash and Bitcoin are used for the ransoms, making tracing and prosecuting the perpetrators difficult.

Ransomware attacks are typically carried out using a Trojan that is disguised as a legitimate file that the user is tricked into downloading or opening when it arrives as an email attachment. However, one high-profile example, the “WannaCry worm”, traveled automatically between computers without user interaction.

Starting from around 2012, the use of ransomware scams has grown internationally.[5][6][7] In June 2013, vendor McAfee released data showing that it had collected more than double the number of ransomware samples that quarter than in the same quarter of the previous year.[8] CryptoLocker was particularly successful, procuring an estimated US$3 million before it was taken down by authorities,[9] and CryptoWall was estimated by the US Federal Bureau of Investigation (FBI) to have accrued over US$18 million by June 2015.[10]

The concept of file-encrypting ransomware was invented and implemented by Young and Yung at Columbia University and was presented at the 1996 IEEE Security & Privacy conference. It is called cryptoviral extortion and it was inspired by the fictional facehugger in the movie Alien.[11] Cryptoviral extortion is the following three-round protocol carried out between the attacker and the victim:[1]

1. [attacker → victim] The attacker generates a key pair, places the corresponding public key in the malware, and releases the malware.
2. [victim → attacker] To carry out the attack, the malware generates a random symmetric key and encrypts the victim’s data with it. It uses the public key to encrypt the symmetric key (hybrid encryption), producing a small asymmetric ciphertext, then zeroizes the symmetric key and the original plaintext data, and displays a ransom message that includes the asymmetric ciphertext and payment instructions. The victim sends the asymmetric ciphertext and the ransom to the attacker.
3. [attacker → victim] The attacker receives the payment, deciphers the asymmetric ciphertext with the private key, and sends the symmetric key to the victim, who uses it to decrypt the data.

The symmetric key is randomly generated and will not assist other victims. At no point is the attacker’s private key exposed to victims and the victim need only send a very small ciphertext (the encrypted symmetric-cipher key) to the attacker.
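The protocol can be sketched in miniature. The following toy Python example uses textbook RSA with tiny primes and a trivial XOR cipher purely so it is self-contained; real attacks use full-size RSA (or ECC) and a strong symmetric cipher such as AES, and every name and parameter here is invented for the sketch:

```python
import secrets

# Round 1: the attacker generates a key pair and embeds only the PUBLIC
# key in the malware. Textbook RSA with tiny primes (p=61, q=53):
n, e, d = 3233, 17, 2753            # n = 61*53; e*d = 1 mod phi(n)

def xor_cipher(data: bytes, key: int) -> bytes:
    # Toy symmetric cipher: XOR each byte with a key-derived byte.
    # Applying it twice with the same key restores the original data.
    return bytes(b ^ ((key + i) % 256) for i, b in enumerate(data))

# Round 2: on the victim's machine, the malware picks a fresh random
# symmetric key, encrypts the files with it, then encrypts the symmetric
# key under the attacker's public key. Only this small asymmetric
# ciphertext ever needs to be sent to the attacker.
sym_key = secrets.randbelow(n - 2) + 2
ciphertext_files = xor_cipher(b"victim document", sym_key)
encrypted_sym_key = pow(sym_key, e, n)       # RSA-encrypt the session key

# Round 3: the victim pays and sends encrypted_sym_key; the attacker
# decrypts it with the PRIVATE key and returns the symmetric key.
recovered_key = pow(encrypted_sym_key, d, n)
assert recovered_key == sym_key
assert xor_cipher(ciphertext_files, recovered_key) == b"victim document"
```

Note how the design choice described above shows up directly: the private key `d` never leaves the attacker, and each victim’s random `sym_key` is useless to any other victim.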

Ransomware attacks are typically carried out using a Trojan, entering a system through, for example, a downloaded file or a vulnerability in a network service. The program then runs a payload, which locks the system in some fashion, or claims to lock the system but does not (e.g., a scareware program). Payloads may display a fake warning purportedly by an entity such as a law enforcement agency, falsely claiming that the system has been used for illegal activities or contains content such as pornography and “pirated” media.[12][13][14]

Some payloads consist simply of an application designed to lock or restrict the system until payment is made, typically by setting the Windows Shell to itself,[15] or even modifying the master boot record and/or partition table to prevent the operating system from booting until it is repaired.[16] The most sophisticated payloads encrypt files, with many using strong encryption to encrypt the victim’s files in such a way that only the malware author has the needed decryption key.[1][17][18]

Payment is virtually always the goal, and the victim is coerced into paying for the ransomware to be removed (which may or may not actually occur), either by supplying a program that can decrypt the files, or by sending an unlock code that undoes the payload’s changes. A key element in making ransomware work for the attacker is a convenient payment system that is hard to trace. A range of such payment methods have been used, including wire transfers, premium-rate text messages,[19] pre-paid voucher services such as Paysafecard,[5][20][21] and the digital currency Bitcoin.[22][23][24] A 2016 survey commissioned by Citrix claimed that larger businesses are holding bitcoin as a contingency plan.[25]

The first known malware extortion attack, the “AIDS Trojan” written by Joseph Popp in 1989, had a design failure so severe it was not necessary to pay the extortionist at all. Its payload hid the files on the hard drive and encrypted only their names, and displayed a message claiming that the user’s license to use a certain piece of software had expired. The user was asked to pay US$189 to “PC Cyborg Corporation” in order to obtain a repair tool even though the decryption key could be extracted from the code of the Trojan. The Trojan was also known as “PC Cyborg”. Popp was declared mentally unfit to stand trial for his actions, but he promised to donate the profits from the malware to fund AIDS research.[26]

The idea of abusing anonymous cash systems to safely collect ransom from human kidnapping was introduced in 1992 by Sebastiaan von Solms and David Naccache.[27] This money collection method is a key feature of ransomware. In the von Solms-Naccache scenario a newspaper publication was used (since bitcoin ledgers did not exist at the time the paper was written).

The notion of using public key cryptography for data kidnapping attacks was introduced in 1996 by Adam L. Young and Moti Yung. Young and Yung critiqued the failed AIDS Information Trojan that relied on symmetric cryptography alone, the fatal flaw being that the decryption key could be extracted from the Trojan, and implemented an experimental proof-of-concept cryptovirus on a Macintosh SE/30 that used RSA and the Tiny Encryption Algorithm (TEA) to hybrid encrypt the victim’s data. Since public key crypto is used, the cryptovirus only contains the encryption key; the attacker keeps the corresponding private decryption key private. Young and Yung’s original experimental cryptovirus had the victim send the asymmetric ciphertext to the attacker, who deciphers it and returns the symmetric decryption key it contains to the victim for a fee. Long before electronic money existed, Young and Yung proposed that electronic money could be extorted through encryption as well, stating that “the virus writer can effectively hold all of the money ransom until half of it is given to him. Even if the e-money was previously encrypted by the user, it is of no use to the user if it gets encrypted by a cryptovirus”.[1] They referred to these attacks as “cryptoviral extortion”, an overt attack that is part of a larger class of attacks in a field called cryptovirology, which encompasses both overt and covert attacks.[1] The cryptoviral extortion protocol was inspired by the forced-symbiotic relationship between H. R. Giger’s facehugger and its host in the movie Alien.[1][11]

Examples of extortionate ransomware became prominent in May 2005.[28] By mid-2006, Trojans such as Gpcode, TROJ.RANSOM.A, Archiveus, Krotten, Cryzip, and MayArchive began utilizing more sophisticated RSA encryption schemes, with ever-increasing key sizes. Gpcode.AG, which was detected in June 2006, encrypted data with a 660-bit RSA public key.[29] In June 2008, a variant known as Gpcode.AK was detected. It used a 1024-bit RSA key, believed large enough to be computationally infeasible to break without a concerted distributed effort.[30][31][32][33]

Encrypting ransomware returned to prominence in late 2013 with the propagation of CryptoLocker, which used the Bitcoin digital currency platform to collect ransom money. In December 2013, ZDNet estimated, based on Bitcoin transaction information, that between 15 October and 18 December the operators of CryptoLocker had procured about US$27 million from infected users.[34] The CryptoLocker technique was widely copied in the months following, including by CryptoLocker 2.0 (which, despite the name, is unrelated to the original CryptoLocker), CryptoDefense (which initially contained a major design flaw that stored the private key on the infected system in a user-retrievable location, due to its use of Windows’ built-in encryption APIs),[23][35][36][37] and the August 2014 discovery of a Trojan specifically targeting network-attached storage devices produced by Synology.[38] In January 2015, it was reported that ransomware-styled attacks have occurred against individual websites via hacking, and through ransomware designed to target Linux-based web servers.[39][40][41]

The Microsoft Malware Protection Center identified a trend away from WSF files in favor of LNK files and PowerShell scripting.[42] These LNK shortcut files install Locky ransomware by automating infection operations rather than relying on traditional user downloads of WSF files, all of which is made possible by PowerShell, the scripting application universal to Windows. Unfortunately, cyber criminals have been able to leverage PowerShell for their attacks for years. In a recent report, the application was found to be involved in nearly 40% of endpoint security incidents.[43] While attackers have been finding weaknesses in the Windows operating system for years, it’s clear that there’s something problematic with PowerShell scripting.[44]

Some ransomware strains have used proxies tied to Tor hidden services to connect to their command and control servers, increasing the difficulty of tracing the exact location of the criminals.[45][46] Furthermore, dark web vendors have increasingly started to offer the technology as a service.[46][47][48]

Symantec has classified ransomware as the most dangerous cyber threat.[49]

In August 2010, Russian authorities arrested nine individuals connected to a ransomware Trojan known as WinLock. Unlike the previous Gpcode Trojan, WinLock did not use encryption. Instead, WinLock trivially restricted access to the system by displaying pornographic images, and asked users to send a premium-rate SMS (costing around US$10) to receive a code that could be used to unlock their machines. The scam hit numerous users across Russia and neighboring countries, reportedly earning the group over US$16 million.[14][50]

In 2011, a ransomware Trojan surfaced that imitated the Windows Product Activation notice, and informed users that a system’s Windows installation had to be re-activated due to “[being a] victim of fraud”. An online activation option was offered (like the actual Windows activation process), but was unavailable, requiring the user to call one of six international numbers to input a 6-digit code. While the malware claimed that this call would be free, it was routed through a rogue operator in a country with high international phone rates, who placed the call on hold, causing the user to incur large international long distance charges.[12]

In February 2013, a ransomware Trojan based on the Stamp.EK exploit kit surfaced; the malware was distributed via sites hosted on the project hosting services SourceForge and GitHub that claimed to offer “fake nude pics” of celebrities.[51] In July 2013, an OS X-specific ransomware Trojan surfaced, which displays a web page that accuses the user of downloading pornography. Unlike its Windows-based counterparts, it does not block the entire computer, but simply exploits the behavior of the web browser itself to frustrate attempts to close the page through normal means.[52]

In July 2013, a 21-year-old man from Virginia, whose computer coincidentally did contain pornographic photographs of underaged girls with whom he had conducted sexualized communications, turned himself in to police after receiving and being deceived by ransomware purporting to be an FBI message accusing him of possessing child pornography. An investigation discovered the incriminating files, and the man was charged with child sexual abuse and possession of child pornography.[53]

The converse of ransomware is a cryptovirology attack invented by Adam L. Young that threatens to publish stolen information from the victim’s computer system rather than deny the victim access to it.[54] In a leakware attack, malware exfiltrates sensitive host data either to the attacker or alternatively, to remote instances of the malware, and the attacker threatens to publish the victim’s data unless a ransom is paid. The attack was presented at West Point in 2003 and was summarized in the book Malicious Cryptography as follows, “The attack differs from the extortion attack in the following way. In the extortion attack, the victim is denied access to its own valuable information and has to pay to get it back, where in the attack that is presented here the victim retains access to the information but its disclosure is at the discretion of the computer virus”.[55] The attack is rooted in game theory and was originally dubbed “non-zero sum games and survivable malware”. The attack can yield monetary gain in cases where the malware acquires access to information that may damage the victim user or organization, e.g., reputational damage that could result from publishing proof that the attack itself was a success.

With the increased popularity of ransomware on PC platforms, ransomware targeting mobile operating systems has also proliferated. Typically, mobile ransomware payloads are blockers, as there is little incentive to encrypt data since it can be easily restored via online synchronization.[56] Mobile ransomware typically targets the Android platform, as it allows applications to be installed from third-party sources.[56][57] The payload is typically distributed as an APK file installed by an unsuspecting user; it may attempt to display a blocking message over top of all other applications,[57] while some variants have used a form of clickjacking to cause the user to grant “device administrator” privileges and thereby achieve deeper access to the system.[58]

Different tactics have been used on iOS devices, such as exploiting iCloud accounts and using the Find My iPhone system to lock access to the device.[59] On iOS 10.3, Apple patched a bug in the handling of JavaScript pop-up windows in Safari that had been exploited by ransomware websites.[60]

In 2012, a major ransomware Trojan known as Reveton began to spread. Based on the Citadel Trojan (which is itself based on the Zeus Trojan), its payload displays a warning purportedly from a law enforcement agency claiming that the computer has been used for illegal activities, such as downloading unlicensed software or child pornography. Due to this behaviour, it is commonly referred to as the “Police Trojan”.[61][62][63] The warning informs the user that to unlock their system, they would have to pay a fine using a voucher from an anonymous prepaid cash service such as Ukash or Paysafecard. To increase the illusion that the computer is being tracked by law enforcement, the screen also displays the computer’s IP address, while some versions display footage from a victim’s webcam to give the illusion that the user is being recorded.[5][64]

Reveton initially began spreading in various European countries in early 2012.[5] Variants were localized with templates branded with the logos of different law enforcement organizations based on the user’s country; for example, variants used in the United Kingdom contained the branding of organizations such as the Metropolitan Police Service and the Police National E-Crime Unit. Another version contained the logo of the royalty collection society PRS for Music, which specifically accused the user of illegally downloading music.[65] In a statement warning the public about the malware, the Metropolitan Police clarified that they would never lock a computer in such a way as part of an investigation.[5][13]

In May 2012, Trend Micro threat researchers discovered templates for variations for the United States and Canada, suggesting that its authors may have been planning to target users in North America.[66] By August 2012, a new variant of Reveton began to spread in the United States, claiming to require the payment of a $200 fine to the FBI using a MoneyPak card.[6][7][64] In February 2013, a Russian citizen was arrested in Dubai by Spanish authorities for his connection to a crime ring that had been using Reveton; ten other individuals were arrested on money laundering charges.[67] In August 2014, Avast Software reported that it had found new variants of Reveton that also distribute password stealing malware as part of its payload.[68]

Encrypting ransomware reappeared in September 2013 with a Trojan known as CryptoLocker, which generated a 2048-bit RSA key pair, uploaded it in turn to a command-and-control server, and used it to encrypt files matching a whitelist of specific file extensions. The malware threatened to delete the private key if a payment of Bitcoin or a pre-paid cash voucher was not made within 3 days of the infection. Due to the extremely large key size it uses, analysts and those affected by the Trojan considered CryptoLocker extremely difficult to repair.[22][69][70][71] Even after the deadline passed, the private key could still be obtained using an online tool, but the price would increase to 10 BTC (approximately US$2,300 as of November 2013).[72][73]

CryptoLocker was isolated by the seizure of the Gameover ZeuS botnet as part of Operation Tovar, as officially announced by the U.S. Department of Justice on 2 June 2014. The Department of Justice also publicly issued an indictment against the Russian hacker Evgeniy Bogachev for his alleged involvement in the botnet.[74][75] It was estimated that at least US$3 million was extorted with the malware before the shutdown.[9]

In September 2014, a wave of ransomware Trojans surfaced that first targeted users in Australia, under the names CryptoWall and CryptoLocker (which is, as with CryptoLocker 2.0, unrelated to the original CryptoLocker). The Trojans spread via fraudulent e-mails claiming to be failed parcel delivery notices from Australia Post; to evade detection by automatic e-mail scanners that follow all links on a page to scan for malware, this variant was designed to require users to visit a web page and enter a CAPTCHA code before the payload is actually downloaded, preventing such automated processes from being able to scan the payload. Symantec determined that these new variants, which it identified as CryptoLocker.F, were again, unrelated to the original CryptoLocker due to differences in their operation.[76][77] A notable victim of the Trojans was the Australian Broadcasting Corporation; live programming on its television news channel ABC News 24 was disrupted for half an hour and shifted to Melbourne studios due to a CryptoWall infection on computers at its Sydney studio.[78][79][80]

Another Trojan in this wave, TorrentLocker, initially contained a design flaw comparable to CryptoDefense; it used the same keystream for every infected computer, making the encryption trivial to overcome. However, this flaw was later fixed.[35] By late-November 2014, it was estimated that over 9,000 users had been infected by TorrentLocker in Australia alone, trailing only Turkey with 11,700 infections.[81]
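The severity of that flaw can be illustrated with a toy example. Assuming, purely for the sketch, that the malware applied a simple XOR stream cipher with one keystream shared by all victims (the real TorrentLocker scheme differed in detail), a single known-plaintext pair lets an analyst recover the keystream and then decrypt any victim’s files:

```python
# Toy illustration of why reusing one keystream across all infected
# machines is fatal. The XOR cipher and all values here are invented.

def xor_stream(data: bytes, keystream: bytes) -> bytes:
    # Apply the keystream cyclically; XOR is its own inverse.
    return bytes(b ^ keystream[i % len(keystream)] for i, b in enumerate(data))

keystream = bytes([0x5A, 0x13, 0xC7, 0x88] * 8)   # same on every machine

# An analyst who holds one file in BOTH plaintext and ciphertext form
# (a known-plaintext pair) XORs the two to recover the keystream...
known_plain = b"%PDF-1.4 header and other predictable bytes....."
known_cipher = xor_stream(known_plain, keystream)
recovered = bytes(p ^ c for p, c in zip(known_plain, known_cipher))

# ...and can then decrypt every other victim's files with it.
other_cipher = xor_stream(b"secret spreadsheet", keystream)
assert xor_stream(other_cipher, recovered[:len(keystream)]) == b"secret spreadsheet"
```

This is why properly designed ransomware (like the cryptoviral extortion protocol described earlier) generates a fresh random key per victim.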

Another major ransomware Trojan targeting Windows, CryptoWall, first appeared in 2014. One strain of CryptoWall was distributed as part of a malvertising campaign on the Zedo ad network in late-September 2014 that targeted several major websites; the ads redirected to rogue websites that used browser plugin exploits to download the payload. A Barracuda Networks researcher also noted that the payload was signed with a digital signature in an effort to appear trustworthy to security software.[82] CryptoWall 3.0 used a payload written in JavaScript as part of an email attachment, which downloads executables disguised as JPG images. To further evade detection, the malware creates new instances of explorer.exe and svchost.exe to communicate with its servers. When encrypting files, the malware also deletes volume shadow copies, and installs spyware that steals passwords and Bitcoin wallets.[83]

The FBI reported in June 2015 that nearly 1,000 victims had contacted the bureau’s Internet Crime Complaint Center to report CryptoWall infections, and estimated losses of at least $18 million.[10]

The most recent version, CryptoWall 4.0, enhanced its code to avoid antivirus detection, and encrypts not only the data in files but also the file names.[84]

Fusob is one of the major mobile ransomware families. Between April 2015 and March 2016, Fusob accounted for about 56 percent of mobile ransomware.[85]

Like typical mobile ransomware, it employs scare tactics to extort people into paying a ransom.[86] The program pretends to be an accusatory authority, demanding that the victim pay a fine of US$100 to US$200 or face a fictitious charge. Rather surprisingly, Fusob suggests using iTunes gift cards for payment, and a timer counting down on the screen adds to the user’s anxiety.

In order to infect devices, Fusob masquerades as a pornographic video player. Thus, victims, thinking it is harmless, unwittingly download Fusob.[87]

When Fusob is installed, it first checks the language used on the device. If the device uses Russian or certain Eastern European languages, Fusob does nothing. Otherwise, it proceeds to lock the device and demand ransom. About 40% of victims are in Germany, with the United Kingdom and the United States following at 14.5% and 11.4% respectively.

Fusob has much in common with Small, another major family of mobile ransomware. Together they represented over 93% of mobile ransomware between 2015 and 2016.

In May 2017, the WannaCry ransomware attack spread through the Internet, using an exploit vector named EternalBlue, which was leaked from the U.S. National Security Agency. The ransomware attack, unprecedented in scale,[88] infected more than 230,000 computers in over 150 countries,[89] demanding ransom in the Bitcoin cryptocurrency in 20 different languages. WannaCrypt demanded US$300 per computer.[90] The attack affected Telefónica and several other large companies in Spain, as well as parts of the British National Health Service (NHS), where at least 16 hospitals had to turn away patients or cancel scheduled operations,[91] FedEx, Deutsche Bahn, Honda,[92] Renault, as well as the Russian Interior Ministry and Russian telecom MegaFon.[93] The attackers gave their victims a 7-day deadline from the day their computers got infected, after which the encrypted files would be deleted.[94]

Petya was first discovered in March 2016; unlike other forms of encrypting ransomware, the malware aimed to infect the master boot record, installing a payload which encrypts the file tables of the NTFS file system the next time that the infected system boots, blocking the system from booting into Windows at all until the ransom is paid. Check Point reported that despite what it believed to be an innovative evolution in ransomware design, it had resulted in relatively-fewer infections than other ransomware active around the same time frame.[95]

On June 27, 2017, a heavily modified version of Petya was used for a global cyberattack primarily targeting Ukraine. This version had been modified to propagate using the same EternalBlue exploit that was used by WannaCry. Due to another design change, it is also unable to actually unlock a system after the ransom is paid; this led to security analysts speculating that the attack was not meant to generate illicit profit, but to simply cause disruption.[96][97]

On October 24, 2017, some users in Russia and Ukraine reported a new ransomware attack, named “Bad Rabbit”, which follows a similar pattern to WannaCry and Petya by encrypting the user’s file tables and then demanding a Bitcoin payment to decrypt them. ESET believed the ransomware to have been distributed by a bogus update to Adobe Flash software.[98] Agencies affected by the ransomware included Interfax, Odessa International Airport, Kiev Metro, and the Ministry of Infrastructure of Ukraine.[99] As it used corporate network structures to spread, the ransomware was also discovered in other countries, including Turkey, Germany, Poland, Japan, South Korea, and the United States.[100] Experts believed the attack was tied to the Petya attack in Ukraine, though the only clue to the culprits’ identity is the names of characters from the Game of Thrones series embedded within the code.[100]

Security experts found that the ransomware did not use the EternalBlue exploit to spread, and a simple method to vaccinate an unaffected machine running older Windows versions was found by October 24, 2017.[101][102] Further, the sites that had been used to spread the bogus Flash update went offline or removed the problematic files within a few days of Bad Rabbit’s discovery, effectively killing off its spread.[100]

As with other forms of malware, security software (antivirus software) might not detect a ransomware payload, or, especially in the case of encrypting payloads, might detect it only after encryption is under way or complete, particularly if a new version unknown to the protective software is distributed.[103] If an attack is suspected or detected in its early stages, there is still a window to act: encryption takes some time, and immediate removal of the malware (a relatively simple process) before it has completed would stop further damage to data, though it cannot salvage what has already been lost.[104][105]

Security experts have suggested precautionary measures for dealing with ransomware. Using software or other security policies to block known payloads from launching will help to prevent infection, but will not protect against all attacks.[22][106] Keeping “offline” backups of data stored in locations inaccessible from any potentially infected computer, such as external storage drives or devices that do not have any access to any network (including the Internet), prevents them from being accessed by the ransomware. Installing security updates issued by software vendors can mitigate the vulnerabilities leveraged by certain strains to propagate.[107][108][109][110][111] Other measures include cyber hygiene (exercising caution when opening e-mail attachments and links), network segmentation, and keeping critical computers isolated from networks.[112][113] Furthermore, to mitigate the spread of ransomware, measures of infection control can be applied.[114] These may include disconnecting infected machines from all networks, educational programs,[115] effective communication channels, malware surveillance, and ways of collective participation.[114]

There are a number of tools intended specifically to decrypt files locked by ransomware, although successful recovery may not be possible.[2][116] If the same encryption key is used for all files, decryption tools use files for which there are both uncorrupted backups and encrypted copies (a known-plaintext attack in the jargon of cryptanalysis); recovery of the key, if it is possible, may take several days.[117] Free ransomware decryption tools can help decrypt files encrypted by the following forms of ransomware: AES_NI, Alcatraz Locker, Apocalypse, BadBlock, Bart, BTCWare, Crypt888, CryptoMix, CrySiS, EncrypTile, FindZip, Globe, Hidden Tear, Jigsaw, LambdaLocker, Legion, NoobCrypt, Stampado, SZFLocker, TeslaCrypt, XData.[118]

The publication of proof-of-concept attack code is common among academic researchers and vulnerability researchers. It teaches the nature of the threat, conveys the gravity of the issues, and enables countermeasures to be devised and put into place. However, lawmakers with the support of law-enforcement bodies are contemplating making the creation of ransomware illegal. In the state of Maryland the original draft of HB 340 made it a felony to create ransomware, punishable by up to 10 years in prison.[119] However, this provision was removed from the final version of the bill.[120] A minor in Japan was arrested for creating and distributing ransomware code.[121] Young and Yung have had the ANSI C source code to a ransomware cryptotrojan on-line, at cryptovirology.com, since 2005 as part of a cryptovirology book being written. The source code to the cryptotrojan is still live on the Internet and is associated with a draft of Chapter 2.[122]


security – Fundamental difference between Hashing and …

Well, you could look it up in Wikipedia… But since you want an explanation, I’ll do my best here:

Hash functions provide a mapping between an arbitrary-length input and a (usually) fixed-length (or smaller) output. It can be anything from a simple crc32 to a full-blown cryptographic hash function such as MD5 or SHA-1/2/256/512. The point is that there’s a one-way mapping going on. It’s always a many:1 mapping (meaning there will always be collisions), since every function produces a smaller output than it’s capable of accepting as input (if you feed every possible 1 MB file into MD5, you’ll get a ton of collisions).
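Those two properties (fixed-length output regardless of input size, hence many:1) are easy to see with Python’s standard hashlib:

```python
import hashlib

# Arbitrary-length inputs all map to a fixed 256-bit (32-byte) digest,
# so by the pigeonhole principle collisions must exist in theory, even
# though none have been found for SHA-256 in practice.
short_digest = hashlib.sha256(b"a").hexdigest()
long_digest = hashlib.sha256(b"a" * 1_000_000).hexdigest()

assert len(short_digest) == len(long_digest) == 64   # 64 hex chars = 32 bytes
assert short_digest != long_digest
# One-way: nothing in either digest lets you walk back to the input.
```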

The reason they are hard (or practically impossible) to reverse is because of how they work internally. Most cryptographic hash functions iterate over the input set many times to produce the output. So if we look at each fixed-length chunk of input (which is algorithm dependent), the hash function will call that the current state. It will then iterate over the state, change it to a new one, and use that as feedback into itself (MD5 does this 64 times for each 512-bit chunk of data). It then somehow combines the resultant states from all these iterations back together to form the resultant hash.

Now, if you wanted to decode the hash, you’d first need to figure out how to split the given hash into its iterated states (1 possibility for inputs smaller than the size of a chunk of data, many for larger inputs). Then you’d need to reverse the iteration for each state. Now, to explain why this is VERY hard, imagine trying to deduce a and b from the following formula: 10 = a + b. There are 11 non-negative integer pairs of a and b that can work. Now loop over that a bunch of times: tmp = a + b; a = b; b = tmp. For 64 iterations, you’d have over 10^64 possibilities to try. And that’s just a simple addition where some state is preserved from iteration to iteration. Real hash functions do a lot more than one operation (MD5 does about 15 operations on 4 state variables). And since each iteration depends on the state of the previous one, and the previous state is destroyed in creating the current one, it’s all but impossible to determine the input state that led to a given output state (for each iteration, no less). Combine that with the large number of possibilities involved, and decoding even an MD5 will take a nearly infinite (but not infinite) amount of resources. So many resources that it’s actually significantly cheaper to brute-force the hash if you have an idea of the size of the input (for smaller inputs) than it is to even try to decode the hash.
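
The ambiguity argument above can be made concrete with a toy model (this sketch is mine, not the answer's; the mixing step is deliberately lossy to mimic a hash round):

```python
# Given only "a + b = 10", the inputs are ambiguous.
candidates = [(a, b) for a in range(11) for b in range(11) if a + b == 10]
print(len(candidates))  # 11 non-negative pairs, indistinguishable from the output

# Iterating a lossy mixing step, as a hash round does, collapses distinct
# starting pairs onto the same final state -- the toy version of a collision.
def toy_rounds(a, b, rounds=64):
    for _ in range(rounds):
        a, b = b, (2 * a + b) % 10  # 2*a mod 10 is lossy: a and a+5 collide
    return a, b

assert toy_rounds(0, 7) == toy_rounds(5, 7)  # two inputs, one final state
```

Running the rounds forward is trivial; running them backward from the final state alone forces you to enumerate every colliding predecessor at every round.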

Encryption functions provide a 1:1 mapping between an arbitrary-length input and output. And they are always reversible. The important thing to note is that it’s reversible using some method. And it’s always 1:1 for a given key. Now, there are multiple input:key pairs that might generate the same output (in fact there usually are, depending on the encryption function). Good encrypted data is indistinguishable from random noise. This is different from a good hash output, which is always of a consistent format.

Use a hash function when you want to compare a value but can’t store the plain representation (for any number of reasons). Passwords should fit this use-case very well since you don’t want to store them plain-text for security reasons (and shouldn’t). But what if you wanted to check a filesystem for pirated music files? It would be impractical to store 3 MB per music file. So instead, take the hash of the file, and store that (MD5 would store 16 bytes instead of 3 MB). That way, you just hash each file and compare to the stored database of hashes (this doesn’t work as well in practice because of re-encoding, changing file headers, etc., but it’s an example use-case).

Use a hash function when you’re checking validity of input data. That’s what they are designed for. If you have 2 pieces of input, and want to check to see if they are the same, run both through a hash function. The probability of a collision is astronomically low for small input sizes (assuming a good hash function). That’s why it’s recommended for passwords. For passwords up to 32 characters, MD5 has 4 times the output space. SHA-1 has 6 times the output space (approximately). SHA-512 has about 16 times the output space. You don’t really care what the password was, you care if it’s the same as the one that was stored. That’s why you should use hashes for passwords.

Use encryption whenever you need to get the input data back out. Notice the word need. If you’re storing credit card numbers, you need to get them back out at some point, but don’t want to store them plain text. So instead, store the encrypted version and keep the key as safe as possible.

Hash functions are also great for signing data. For example, if you’re using HMAC, you sign a piece of data by taking a hash of the data concatenated with a known but not transmitted value (a secret value). So, you send the plain-text and the HMAC hash. Then, the receiver simply hashes the submitted data with the known value and checks to see if it matches the transmitted HMAC. If it’s the same, you know it wasn’t tampered with by a party without the secret value. This is commonly used in secure cookie systems by HTTP frameworks, as well as in message transmission of data over HTTP where you want some assurance of integrity in the data.
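
That HMAC flow can be shown with Python's standard `hmac` module; the secret and cookie values here are illustrative only:

```python
import hmac
import hashlib

SECRET = b"server-side secret, never transmitted"  # hypothetical key

def sign(message: bytes) -> str:
    """Tag a message with HMAC-SHA-256 over the message and the secret."""
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    # compare_digest avoids timing side channels when checking the tag
    return hmac.compare_digest(sign(message), tag)

cookie = b"user=alice"
tag = sign(cookie)
assert verify(cookie, tag)
assert not verify(b"user=admin", tag)  # tampered data fails verification
```

Only a party holding the secret can produce a tag that verifies, which is exactly the integrity assurance described above.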

A key feature of cryptographic hash functions is that they should be very fast to create, and very difficult/slow to reverse (so much so that it’s practically impossible). This poses a problem with passwords. If you store sha512(password), you’re not doing a thing to guard against rainbow tables or brute force attacks. Remember, the hash function was designed for speed. So it’s trivial for an attacker to just run a dictionary through the hash function and test each result.

Adding a salt helps matters since it adds a bit of unknown data to the hash. So instead of finding anything that matches md5(foo), they need to find something that when added to the known salt produces md5(foo.salt) (which is very much harder to do). But it still doesn’t solve the speed problem since if they know the salt it’s just a matter of running the dictionary through.

So, there are ways of dealing with this. One popular method is called key strengthening (or key stretching). Basically, you iterate over a hash many times (thousands usually). This does two things. First, it slows down the runtime of the hashing algorithm significantly. Second, if implemented right (passing the input and salt back in on each iteration) actually increases the entropy (available space) for the output, reducing the chances of collisions. A trivial implementation is:
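
The code snippet this sentence originally pointed to did not survive extraction; a minimal Python sketch of the idea it describes (iteration count and names are illustrative):

```python
import hashlib

def stretched_hash(password: bytes, salt: bytes, rounds: int = 5000) -> str:
    """Naive key stretching: re-feed password and salt on every round."""
    h = hashlib.sha512(password + salt).hexdigest()
    for _ in range(rounds):
        # Re-introducing password and salt each round keeps fresh input
        # flowing into the hash, per the point made in the text above.
        h = hashlib.sha512(h.encode() + password + salt).hexdigest()
    return h
```

The thousands of rounds slow each guess for an attacker by the same factor, while costing the server only a few milliseconds per login.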

There are other, more standard implementations such as PBKDF2, BCrypt. But this technique is used by quite a few security related systems (such as PGP, WPA, Apache and OpenSSL).
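
Of the standard options, PBKDF2 is available directly in Python's `hashlib`; a hedged sketch (the iteration count and storage format are my choices):

```python
import hashlib
import os

salt = os.urandom(16)
# 200_000 iterations of HMAC-SHA-256; tune the count to your hardware.
dk = hashlib.pbkdf2_hmac("sha256", b"correct horse battery staple", salt, 200_000)
stored = salt.hex() + ":" + dk.hex()  # store the salt alongside the derived key

def check(password: bytes, record: str) -> bool:
    """Re-derive with the stored salt and compare against the stored key."""
    salt_hex, dk_hex = record.split(":")
    candidate = hashlib.pbkdf2_hmac("sha256", password,
                                    bytes.fromhex(salt_hex), 200_000)
    return candidate.hex() == dk_hex

assert check(b"correct horse battery staple", stored)
```

Prefer a vetted construction like this over a hand-rolled loop: the hard parts (salting, iteration, output derivation) are already specified and reviewed.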

The bottom line: hash(password) is not good enough. hash(password + salt) is better, but still not good enough. Use a stretched hash mechanism to produce your password hashes.

Do not under any circumstances feed the output of one hash directly back into the hash function:
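
The anti-pattern snippet the answer showed here was lost in extraction; reconstructed as a sketch (values illustrative), with the `hash1`/`hash2` names the discussion below refers to:

```python
import hashlib

# ANTI-PATTERN -- shown only to illustrate the collision argument below.
password = b"hunter2"
hash1 = hashlib.sha1(password).hexdigest()
hash2 = hashlib.sha1(hash1.encode()).hexdigest()  # output fed straight back in
for _ in range(1000):
    # No fresh input ever enters, so every round's collisions accumulate.
    hash2 = hashlib.sha1(hash2.encode()).hexdigest()
```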

The reason for this has to do with collisions. Remember that all hash functions have collisions because the possible output space (the number of possible outputs) is smaller than the input space. To see why, let’s look at what happens. To preface this, let’s make the assumption that there’s a 0.001% chance of collision from sha1() (it’s much lower in reality, but for demonstration purposes).

Now, hash1 has a probability of collision of 0.001%. But when we do the next hash2 = sha1(hash1);, all collisions of hash1 automatically become collisions of hash2. So now, we have hash1’s rate at 0.001%, and the 2nd sha1() call adds to that. So now, hash2 has a probability of collision of 0.002%. That’s twice as many chances! Each iteration will add another 0.001% chance of collision to the result. So, with 1000 iterations, the chance of collision jumped from a trivial 0.001% to 1%. Now, the degradation is linear, and the real probabilities are far smaller, but the effect is the same (an estimation of the chance of a single collision with md5 is about 1/(2^128), or 1/(3×10^38). While that seems small, thanks to the birthday attack it’s not really as small as it seems).

Instead, by re-appending the salt and password each time, you’re re-introducing data back into the hash function. So any collisions of any particular round are no longer collisions of the next round. So:
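
The corrected snippet this sentence led into was also lost; a Python sketch of the repaired construction (values illustrative):

```python
import hashlib

password, salt = b"hunter2", b"a1b2c3"  # illustrative values

h = hashlib.sha512(password + salt).hexdigest()
for _ in range(1000):
    # Fresh input (password + salt) enters every round, so a collision in
    # one round is not automatically a collision in the next.
    h = hashlib.sha512(h.encode() + password + salt).hexdigest()
```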

This construction has the same chance of collision as the native SHA-512 function, which is what you want. Use that instead.

Read more here:
security – Fundamental difference between Hashing and …

BitLocker Drive Encryption Overview – technet.microsoft.com

BitLocker Drive Encryption is a data protection feature available in Windows Server 2008 R2 and in some editions of Windows 7. Having BitLocker integrated with the operating system addresses the threats of data theft or exposure from lost, stolen, or inappropriately decommissioned computers.

Data on a lost or stolen computer is vulnerable to unauthorized access, either by running a software-attack tool against it or by transferring the computer’s hard disk to a different computer. BitLocker helps mitigate unauthorized data access by enhancing file and system protections. BitLocker also helps render data inaccessible when BitLocker-protected computers are decommissioned or recycled.

BitLocker provides the most protection when used with a Trusted Platform Module (TPM) version 1.2. The TPM is a hardware component installed in many newer computers by the computer manufacturers. It works with BitLocker to help protect user data and to ensure that a computer has not been tampered with while the system was offline.

On computers that do not have a TPM version 1.2, you can still use BitLocker to encrypt the Windows operating system drive. However, this implementation will require the user to insert a USB startup key to start the computer or resume from hibernation, and it does not provide the pre-startup system integrity verification offered by BitLocker with a TPM.

In addition to the TPM, BitLocker offers the option to lock the normal startup process until the user supplies a personal identification number (PIN) or inserts a removable device, such as a USB flash drive, that contains a startup key. These additional security measures provide multifactor authentication and assurance that the computer will not start or resume from hibernation until the correct PIN or startup key is presented.

BitLocker can use a TPM to verify the integrity of early boot components and boot configuration data. This helps ensure that BitLocker makes the encrypted drive accessible only if those components have not been tampered with and the encrypted drive is located in the original computer.

BitLocker helps ensure the integrity of the startup process by taking the following actions:

To use BitLocker, a computer must satisfy certain requirements:

BitLocker is installed automatically as part of the operating system installation. However, BitLocker is not enabled until it is turned on by using the BitLocker setup wizard, which can be accessed from either the Control Panel or by right-clicking the drive in Windows Explorer.

At any time after installation and initial operating system setup, the system administrator can use the BitLocker setup wizard to initialize BitLocker. There are two steps in the initialization process:

When a local administrator initializes BitLocker, the administrator should also create a recovery password or a recovery key. Without a recovery key or recovery password, all data on the encrypted drive may be inaccessible and unrecoverable if there is a problem with the BitLocker-protected drive.

For detailed information about configuring and deploying BitLocker, see the Windows BitLocker Drive Encryption Step-by-Step Guide (http://go.microsoft.com/fwlink/?LinkID=140225).

BitLocker can use an enterprise’s existing Active Directory Domain Services (AD DS) infrastructure to remotely store recovery keys. BitLocker provides a wizard for setup and management, as well as extensibility and manageability through a Windows Management Instrumentation (WMI) interface with scripting support. BitLocker also has a recovery console integrated into the early boot process to enable the user or helpdesk personnel to regain access to a locked computer.

For more information about writing scripts for BitLocker, see Win32_EncryptableVolume (http://go.microsoft.com/fwlink/?LinkId=85983).

Many personal computers today are reused by people other than the computer’s initial owner or user. In enterprise scenarios, computers may be redeployed to other departments, or they might be recycled as part of a standard computer hardware refresh cycle.

On unencrypted drives, data may remain readable even after the drive has been formatted. Enterprises often make use of multiple overwrites or physical destruction to reduce the risk of exposing data on decommissioned drives.

BitLocker can help create a simple, cost-effective decommissioning process. By leaving data encrypted by BitLocker and then removing the keys, an enterprise can permanently reduce the risk of exposing this data. It becomes nearly impossible to access BitLocker-encrypted data after removing all BitLocker keys because this would require cracking 128-bit or 256-bit AES encryption.

BitLocker cannot protect a computer against all possible attacks. For example, if malicious users, or programs such as viruses or rootkits, have access to the computer before it is lost or stolen, they might be able to introduce weaknesses through which they can later access encrypted data. And BitLocker protection can be compromised if the USB startup key is left in the computer, or if the PIN or Windows logon password are not kept secret.

The TPM-only authentication mode is easiest to deploy, manage, and use. It might also be more appropriate for computers that are unattended or must restart while unattended. However, the TPM-only mode offers the least amount of data protection. If parts of your organization have data that is considered highly sensitive on mobile computers, consider deploying BitLocker with multifactor authentication on those computers.

For more information about BitLocker security considerations, see Data Encryption Toolkit for Mobile PCs (http://go.microsoft.com/fwlink/?LinkId=85982).

For servers in a shared or potentially non-secure environment, such as a branch office location, BitLocker can be used to encrypt the operating system drive and additional data drives on the same server.

By default, BitLocker is not installed with Windows Server 2008 R2. Add BitLocker from the Windows Server 2008 R2 Server Manager page. You must restart after installing BitLocker on a server. Using WMI, you can enable BitLocker remotely.

BitLocker is supported on Extensible Firmware Interface (EFI) servers that use a 64-bit processor architecture.

After the drive has been encrypted and protected with BitLocker, local and domain administrators can use the Manage BitLocker page in the BitLocker Drive Encryption item in Control Panel to change the password to unlock the drive, remove the password from the drive, add a smart card to unlock the drive, save or print the recovery key again, automatically unlock the drive, duplicate keys, and reset the PIN.

An administrator may want to temporarily disable BitLocker in certain scenarios, such as:

These scenarios are collectively referred to as the computer upgrade scenario. BitLocker can be enabled or disabled through the BitLocker Drive Encryption item in Control Panel.

The following steps are necessary to upgrade a BitLocker-protected computer:

Forcing BitLocker into disabled mode will keep the drive encrypted, but the drive master key will be encrypted with a symmetric key stored unencrypted on the hard disk. The availability of this unencrypted key disables the data protection offered by BitLocker but ensures that subsequent computer startups succeed without further user input. When BitLocker is enabled again, the unencrypted key is removed from the disk and BitLocker protection is turned back on. Additionally, the drive master key is re-keyed and re-encrypted.

Moving the encrypted drive (that is, the physical disk) to another BitLocker-protected computer does not require any additional steps because the key protecting the drive master key is stored unencrypted on the disk.

For detailed information about disabling BitLocker, see Windows BitLocker Drive Encryption Step-by-Step Guide (http://go.microsoft.com/fwlink/?LinkID=140225).

A number of scenarios can trigger a recovery process, for example:

An administrator can also trigger recovery as an access control mechanism (for example, during computer redeployment). An administrator may decide to lock an encrypted drive and require that users obtain BitLocker recovery information to unlock the drive.

Using Group Policy, an IT administrator can choose which recovery methods to require, deny, or make optional for users who enable BitLocker. The recovery password can be stored in AD DS, and the administrator can make this option mandatory, prohibited, or optional for each user of the computer. Additionally, the recovery data can be stored on a USB flash drive.

The recovery password is a 48-digit, randomly generated number that can be created during BitLocker setup. If the computer enters recovery mode, the user will be prompted to type this password by using the function keys (F1 through F10). The recovery password can be managed and copied after BitLocker is enabled. Using the Manage BitLocker page in the BitLocker Drive Encryption item in Control Panel, the recovery password can be printed or saved to a file for future use.

A domain administrator can configure Group Policy to generate recovery passwords automatically and back them up to AD DS as soon as BitLocker is enabled. The domain administrator can also choose to prevent BitLocker from encrypting a drive unless the computer is connected to the network and AD DS backup of the recovery password is successful.

The recovery key can be created and saved to a USB flash drive during BitLocker setup; it can also be managed and copied after BitLocker is enabled. If the computer enters recovery mode, the user will be prompted to insert the recovery key into the computer.

The rest is here:
BitLocker Drive Encryption Overview – technet.microsoft.com

Comodo Disk Encryption Download – softpedia.com

Comodo Disk Encryption is a reliable application that protects your sensitive data by encrypting your drives using complex algorithms.

It provides you with two different methods of securing your information: you can either encrypt any drive partition that contains personal information using combinations of different hashing and encryption algorithms, or create and mount virtual encrypted partitions on your hard drive and save your data to them.

Since the encryption process can be carried out with two different authentication types, namely Password and USB Stick, the application helps you to add an extra layer of security, thus protecting your critical data from unauthorized users.

When you launch Comodo Disk Encryption for the first time, you will notice that all your drives are automatically recognized (after a restart has been performed). When you click on any partition, detailed information such as the file system, free space, encryption method, and total size is displayed in the bottom pane of the program.

The right-click menu enables you to easily encrypt or decrypt the selected partition, edit the available settings, as well as format it by modifying the file system to NTFS, FAT32 or FAT and the allocation unit size.

By accessing the Encrypt option, you are able to choose one of the available authentication types, then set the properties according to your whims such as hash algorithm and password.

The ‘Virtual Drives’ tab enables you to view all the mounted drives in your system and create, mount, remove or unmount them, as well as edit the encryption settings effortlessly.

In case you want to decrypt a drive, you will just have to choose the proper option from the context menu and bring the partition back to its original form so that the drive becomes accessible to any user.

Overall, Comodo Disk Encryption keeps all your sensitive data protected from hackers, thieves and online scammers by encrypting your hard disks with ease.

See the rest here:
Comodo Disk Encryption Download – softpedia.com

The Encrypting File System – technet.microsoft.com

By Roberta Bragg

An Overview of the Encrypting File System
What EFS Is
Basic How-tos
Planning for and Recovering Encrypted Files: Recovery Policy
How EFS Works
Key Differences Between EFS on Windows 2000, Windows XP, and Windows Server 2003
Misuse and Abuse of EFS and How to Avoid Data Loss or Exposure
Remote Storage of Encrypted Files Using SMB File Shares and WebDAV
Best Practices for SOHO and Small Businesses
Enterprise How-tos
Troubleshooting
Radical EFS: Using EFS to Encrypt Databases and Using EFS with Other Microsoft Products
Disaster Recovery
Overviews and Larger Articles
Summary

The Encrypting File System (EFS) is a component of the NTFS file system on Windows 2000, Windows XP Professional, and Windows Server 2003. (Windows XP Home doesn’t include EFS.) EFS enables transparent encryption and decryption of files by using advanced, standard cryptographic algorithms. Any individual or program that doesn’t possess the appropriate cryptographic key cannot read the encrypted data. Encrypted files can be protected even from those who gain physical possession of the computer that the files reside on. Even persons who are authorized to access the computer and its file system cannot view the data. While other defensive strategies should be used, and encryption isn’t the correct countermeasure for every threat, encryption is a powerful addition to any defensive strategy. EFS is the built-in file encryption tool for Windows file systems.

However, every defensive weapon, if used incorrectly, carries the potential for harm. EFS must be understood, implemented appropriately, and managed effectively to ensure that your experience, the experience of those to whom you provide support, and the data you wish to protect aren’t harmed. This document will

Provide an overview and pointers to resources on EFS.

Point to implementation strategies and best practices.

Name the dangers and counsel mitigation and prevention from harm.

Many online and published resources on EFS exist. The major sources of information are the Microsoft resource kits, product documentation, white papers, and Knowledge Base articles. This paper provides a brief overview of major EFS issues. Wherever possible, it doesn’t rework existing documentation; rather, it provides links to the best resources. In short, it maps the list of desired knowledge and instruction to the actual documents where they can be found. In addition, the paper catalogs the key elements of large documents so that you’ll be able to find the information you need without having to work your way through hundreds of pages of information each time you have a new question.

The paper discusses the following key EFS knowledge areas:

What EFS is

Basic how-tos, such as how to encrypt and decrypt files, recover encrypted files, archive keys, manage certificates, and back up files, and how to disable EFS

How EFS works and EFS architecture and algorithms

Key differences between EFS on Windows 2000, Windows XP, and Windows Server 2003

Misuse and abuse of EFS and how to avoid data loss or exposure

Remote storage of encrypted files using SMB file shares and WebDAV

Best practices for SOHO and small businesses

Enterprise how-tos: how to implement data recovery strategies with PKI and how to implement key recovery with PKI

Troubleshooting

Radical EFS: using EFS to encrypt databases and using EFS with other Microsoft products

Disaster recovery

Where to download EFS-specific tools

Using EFS requires only a few simple bits of knowledge. However, using EFS without knowledge of best practices and without understanding recovery processes can give you a mistaken sense of security, as your files might not be encrypted when you think they are, or you might enable unauthorized access by having a weak password or having made the password available to others. It might also result in a loss of data, if proper recovery steps aren’t taken. Therefore, before using EFS you should read the information links in the section “Misuse and Abuse of EFS and How to Avoid Data Loss or Exposure.” The knowledge in this section warns you where lack of proper recovery operations or misunderstanding can cause your data to be unnecessarily exposed. To implement a secure and recoverable EFS policy, you should have a more comprehensive understanding of EFS.

You can use EFS to encrypt files stored in the file system of Windows 2000, Windows XP Professional, and Windows Server 2003 computers. EFS isn’t designed to protect data while it’s transferred from one system to another. EFS uses symmetric (one key is used to encrypt the files) and asymmetric (two keys are used to protect the encryption key) cryptography. An excellent primer on cryptography is available in the Windows 2000 Resource Kit as is an introduction to Certificate Services. Understanding both of these topics will assist you in understanding EFS.

A solid overview of EFS and a comprehensive collection of information on EFS in Windows 2000 are published in the Distributed Systems Guide of the Windows 2000 Server Resource Kit. This information, most of which resides in Chapter 15 of that guide, is published online at http://www.microsoft.com/technet/prodtechnol/windows2000serv/reskit/default.mspx. (On this site’s page, use the TOC to go to the Distributed Systems Guide, Distributed Security, Encrypting File System.)

There are differences between EFS in Windows 2000, Windows XP Professional, and Windows Server 2003. The Windows XP Professional Resource Kit explains the differences between Windows 2000 and Windows XP Professional’s implementation of EFS, and the document “Encrypting File System in Windows XP and Windows Server 2003” (http://www.microsoft.com/technet/prodtechnol/winxppro/deploy/cryptfs.mspx) details Windows XP and Windows Server 2003 modifications. The section below, “Key Differences between EFS on Windows 2000, Windows XP, and Windows Server 2003,” summarizes these differences.

The following are important basic facts about EFS:

EFS encryption doesn’t occur at the application level but rather at the file-system level; therefore, the encryption and decryption process is transparent to the user and to the application. If a folder is marked for encryption, every file created in or moved to the folder will be encrypted. Applications don’t have to understand EFS or manage EFS-encrypted files any differently than unencrypted files. If a user attempts to open a file and possesses the key to do so, the file opens without additional effort on the user’s part. If the user doesn’t possess the key, they receive an “Access denied” error message.

File encryption uses a symmetric key, which is then itself encrypted with the public key of a public key encryption pair. The related private key must be available in order for the file to be decrypted. This key pair is bound to a user identity and made available to the user who has possession of the user ID and password. If the private key is damaged or missing, even the user that encrypted the file cannot decrypt it. If a recovery agent exists, then the file may be recoverable. If key archival has been implemented, then the key may be recovered, and the file decrypted. If not, the file may be lost. EFS is an excellent file encryption system: there is no “back door.”

File encryption keys can be archived (e.g. exported to a floppy disk) and kept in a safe place to ensure recovery should keys become damaged.

EFS keys are protected by the user’s password. Any user who can obtain the user ID and password can log on as that user and decrypt that user’s files. Therefore, a strong password policy as well as strong user education must be a component of each organization’s security practices to ensure the protection of EFS-encrypted files.

EFS-encrypted files don’t remain encrypted during transport if saved to or opened from a folder on a remote server. The file is decrypted, traverses the network in plaintext, and, if saved to a folder on the local drive that’s marked for encryption, is encrypted locally. EFS-encrypted files can remain encrypted while traversing the network if they’re being saved to a Web folder using WebDAV. This method of remote storage isn’t available for Windows 2000.

EFS uses FIPS 140-evaluated Microsoft Cryptographic Service Providers (CSPs, components which contain encryption algorithms for Microsoft products).

EFS functionality is straightforward, and you can find step-by-step instructions in many documents online. Links to specific articles for each possible EFS function, as well as some documents which summarize multiple functionality, follow. If the document is a Knowledge Base article, the Knowledge Base number appears in parentheses after the article title.

Encrypting and Decrypting

The process of encrypting and decrypting files is very straightforward, but it’s important to decide what to encrypt and to note differences in EFS based on the operating system.

Sharing Encrypted Files

The GUI for sharing encrypted files is available only in Windows XP and Windows Server 2003.

A recovery policy can be an organization’s security policy instituted to plan for proper recovery of encrypted files. It’s also the policy enforced by Local Security Policy Public Key Policy or Group Policy Public Key Policy. In the latter, the recovery policy specifies how encrypted files may be recovered should the user private key be damaged or lost and the encrypted file unharmed. Recovery certificate(s) are specified in the policy. Recovery can be either data recovery (Windows 2000, Windows XP Professional, and Windows Server 2003) or key recovery (Windows Server 2003 with Certificate Services). Windows 2000 EFS requires the presence of a recovery agent (no recovery agent, no file encryption), but Windows XP and Windows Server 2003 don’t. By default, Windows 2000 and Windows Server 2003 have default recovery agents assigned. Windows XP Professional doesn’t.

The data recovery process is simple. The user account bound to the recovery agent certificate is used to decrypt the file. The file should then be delivered in a secure manner to the file owner, who may then encrypt the file. Recovery via automatically archived keys is available only with Windows Server 2003 Certificate Services. Additional configuration beyond the installation of Certificate Services is required. In either case, it’s most important that a written policy and procedures for recovery are in place. These procedures, if well written and if followed, can ensure that recovery keys and agents are available for use and that recovery is securely carried out. Keep in mind that there are two definitions for “recovery policy.” The first definition refers to a written recovery policy and procedures that describe the who, what, where, and when of recovery, as well as what steps should be taken to ensure recovery components are available. The second definition, which is often referred to in the documents below, is the Public Key Policy that’s part of the Local Security Policy on stand-alone systems, or Group Policy in a domain. It can specify which certificates are used for recovery, as well as other aspects of Public Key Policies in the domain. You can find more information in the following documents:

Disabling or Preventing Encryption

You may decide that you don’t wish users to have the ability to encrypt files. By default, they do. You may decide that specific folders shouldn’t contain encrypted files. You may also decide to disable EFS until you can implement a sound EFS policy and train users in proper procedures. There are different ways of disabling EFS depending on the operating system and the desired effect:

System folders cannot be marked for encryption. EFS keys aren’t available during the boot process; thus, if system files were encrypted, the system couldn’t boot. To prevent other folders from being marked for encryption, you can mark them as system folders. If this isn’t possible, then a method to prevent encryption within a folder is defined in “Encrypting File System.”

NT 4.0 doesn’t have the ability to use EFS. If you need to disable EFS for Windows 2000 computers joined to a Windows NT 4.0 domain, see “Need to Turn Off EFS on a Windows 2000-Based Computer in Windows NT 4.0-Based Domain” (288579). The registry key mentioned can also be used to disable EFS in Windows XP Professional and Windows Server 2003.

Disabling EFS for Windows XP Professional can also be done by clearing the checkbox on the property page of the Local Security Policy’s Public Key Policy. EFS can be disabled on XP and Windows Server 2003 computers joined to a Windows Server 2003 domain by clearing the checkbox on the property pages of the domain or organizational unit (OU) Group Policy Public Key Policy.

“HOW TO: Disable/Enable EFS on a Stand-Alone Windows 2000-Based Computer” (243035) details how to save the recovery agent’s certificate and keys when disabling EFS so that you can enable EFS at a future date.

“HOW TO: Disable EFS for All Computers in a Windows 2000-Based Domain” (222022) provides the best instruction set and clearly defines the difference between deleting the domain policy (an OU-based policy or Local Security Policy can still exist) and Initialize Empty Policy (no Windows 2000 EFS encryption is possible anywhere in the domain).

Special Operations

Let enough people look at anything, and you’ll find questions that just aren’t answered by existing documentation or options. A number of these issues, third-party considerations, and post-introduction issues can be resolved by reviewing the following articles.

Specifications for the use of a third-party Certification Authority (CA) can be found at “Third-Party Certification Authority Support for Encrypting File System” (273856). If you wish to use third-party CA certificates for EFS, you should also investigate certificate revocation processing. Windows 2000 EFS certificates aren’t checked for revocation. Windows XP and Windows Server 2003 EFS certificates are checked for revocation in some cases, and third-party certificates may be rejected. Information about certificate revocation handling in EFS can be found in the white paper “Encrypting File System in Windows XP and Windows Server 2003”.

When an existing plaintext file is marked for encryption, it’s first copied to a temporary file. When the process is complete, the temporary file is marked for deletion, which means portions of the original file may remain on the disk and could potentially be accessible via a disk editor. These bits of data, referred to as data shreds or remanence, may be permanently removed by using a revised version of the cipher.exe tool. The tool is part of Service Pack 3 (SP3) for Windows 2000 and is included in Windows Server 2003. Instructions for using the tool, along with the location of a downloadable version, can be found in “HOW TO: Use Cipher.exe to Overwrite Deleted Data in Windows” (315672) and in “Cipher.exe Security Tool for the Encrypting File System” (298009).
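
As a rough model of what such a wipe does, the sketch below overwrites a file’s bytes with a zeros pass, a ones pass, and a random pass before deleting it. This is a hypothetical illustration in Python, not cipher.exe’s actual implementation: the real `cipher /W` operates on all unallocated space on a volume, and its exact pass sequence should be confirmed in the cited articles.

```python
import os

def overwrite_and_delete(path: str, size: int, chunk: int = 4096) -> None:
    """Toy analogue of a cipher.exe-style wipe: overwrite `size` bytes
    at `path` with zeros, then ones, then random data, forcing each
    pass to disk, and finally delete the file."""
    for pattern in (b"\x00", b"\xff", None):   # None selects the random pass
        with open(path, "wb") as f:
            written = 0
            while written < size:
                n = min(chunk, size - written)
                f.write(os.urandom(n) if pattern is None else pattern * n)
                written += n
            f.flush()
            os.fsync(f.fileno())               # don't let the pass sit in cache
    os.remove(path)
```

Overwriting matters because, as described above, NTFS deletion merely unlinks the temporary file; encrypting a folder before creating sensitive files in it avoids leaving shreds in the first place.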

How to make encrypted files display in green in Windows Explorer is explained in “HOW TO: Identify Encrypted Files in Windows XP” (320166).

“How to Enable the Encryption Command on the Shortcut Menu” (241121) provides a registry key to modify for this purpose.

You may wish to protect printer spool files or hard copies of encrypted files while they’re printing. Encryption is transparent to the printing process: if you have the right (possess the key) to decrypt the file and a method exists for printing files, the file will print. However, two issues should concern you. First, if the file is sensitive enough to encrypt, how will you protect the printed copy? Second, the spool file resides in the system32\Spool\Printers folder. How can you protect it while it’s there? You could encrypt that folder, but that would slow printing enormously. The Windows 2000 Resource Kit proposes a separate printer for printing these files, and describes how best to secure that printer, in the Distributed Systems Guide under Distributed Security, “Encrypting File System,” “Printing EFS Files.”

To understand EFS, and therefore anticipate problems, envision potential attacks, and troubleshoot and protect EFS-encrypted files, you should understand the architecture of EFS and the basic encryption, decryption, and recovery algorithms. Much of this information is in the Windows 2000 Resource Kit Distributed Systems Guide, the Windows XP Professional Resource Kit, and the white paper, “Encrypting File System in Windows XP and Windows Server 2003.” Many of the algorithms are also described in product documentation. The examples that follow are from the Windows XP Professional Resource Kit:

A straightforward discussion of the components of EFS, including the EFS service, EFS driver, and the File System Run Time Library, is found in “Components of EFS,” a subsection of Chapter 17, “Encrypting File System” in the Windows XP Professional Resource Kit.

A description of the encryption, decryption, and recovery algorithms EFS uses is in the Resource Kit section “How Files Are Encrypted.” This section includes a discussion of file encryption keys (FEKs) and of the Data Recovery Fields and Data Decryption Fields used to hold FEKs encrypted by user and recovery agent public keys.
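
To make that FEK/DDF/DRF layout concrete, here is a deliberately insecure toy model in Python. XOR stands in for the public-key wrapping and symmetric cipher EFS actually uses (RSA plus DESX/3DES/AES), so this sketches only the structure: one random FEK encrypts the data, and wrapped copies of that FEK travel with the file for the user and for each recovery agent.

```python
import os

def xor_wrap(key: bytes, kek: bytes) -> bytes:
    """Stand-in for the public-key wrapping EFS does with RSA (toy only)."""
    return bytes(a ^ b for a, b in zip(key, kek))

def encrypt_file_toy(plaintext: bytes, user_kek: bytes, recovery_kek: bytes) -> dict:
    """Toy model of EFS's key layout: a random file encryption key (FEK)
    encrypts the data, and wrapped copies of the FEK are stored with the
    file for the user (DDF) and the recovery agent (DRF)."""
    fek = os.urandom(32)
    keystream = (fek * (len(plaintext) // 32 + 1))[:len(plaintext)]
    return {
        "ciphertext": bytes(a ^ b for a, b in zip(plaintext, keystream)),
        "ddf": xor_wrap(fek, user_kek),        # Data Decryption Field
        "drf": [xor_wrap(fek, recovery_kek)],  # Data Recovery Field(s)
    }

def decrypt_file_toy(blob: dict, kek: bytes, field: str = "ddf") -> bytes:
    """Unwrap the FEK from the DDF (user) or DRF (recovery agent),
    then decrypt the file data with it."""
    wrapped = blob["ddf"] if field == "ddf" else blob["drf"][0]
    fek = xor_wrap(wrapped, kek)
    ct = blob["ciphertext"]
    keystream = (fek * (len(ct) // 32 + 1))[:len(ct)]
    return bytes(a ^ b for a, b in zip(ct, keystream))
```

The point of the structure is that either the user’s key or a recovery agent’s key can recover the single FEK, which is why recovery works without sharing the user’s private key.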

“Working with Encryption” includes how-to steps that define the effect of decisions made about changing the encryption properties of folders. The table defines what happens for each file (present, added later, or copied to the folder) for the choice “This folder only” or the option “This folder, subfolders and files.”

“Remote EFS Operations on File Shares and Web Folders” defines what happens to encrypted files and how to enable remote storage.

EFS was introduced in Windows 2000. However, there are differences when compared with Windows XP Professional EFS and Windows Server 2003 EFS, including the following:

You can authorize additional users to access encrypted files (see the section “Sharing Encrypted Files”, above). In Windows 2000, you can implement a programmatic solution for the sharing of encrypted files; however, no interface is available. Windows XP and Windows Server 2003 have this interface.

Offline files can be encrypted. See “HOW TO: Encrypt Offline Files to Secure Data in Windows XP.”

Data recovery agents are recommended but optional. XP doesn’t automatically include a default recovery agent. XP will take advantage of an existing Windows 2000 domain-level recovery agent if one is present, but the lack of a domain recovery agent won’t prevent encryption of files on an XP system. A self-signed recovery agent certificate can be requested by using the cipher /R:filename command, where filename is the name that will be used to create a *.cer file to hold the certificate and a *.pfx file to hold the certificate and private key.

The Triple DES (3DES) encryption algorithm can be used in place of Data Encryption Standard X (DESX), and beginning with XP SP1, the Advanced Encryption Standard (AES) became the default encryption algorithm for EFS.

For Windows XP and Windows Server 2003 local accounts, a password reset disk can be used to safely reset a user’s password. (Domain passwords cannot be reset using the disk.) If an administrator uses the “reset password” option on the user’s account in the Users container of the Computer Management console, EFS files won’t be accessible. If users change the password back to the previous password, they regain access to encrypted files. To create a password reset disk, and for instructions on using one, see the product documentation and/or the article “HOW TO: Create and Use a Password Reset Disk for a Computer That Is Not a Domain Member in Windows XP” (305478).
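
The reason a reset is destructive while a user-initiated change is not can be sketched abstractly: the user’s EFS private key is protected by material derived from the logon password, and a normal password change re-protects the key under the new password, while an administrative reset does not. The Python toy below illustrates only that dependency; the derivation function and XOR wrapping are placeholders, not Windows’ actual DPAPI algorithms.

```python
import hashlib, os

def master_key(password: str, salt: bytes) -> bytes:
    """Placeholder for the password-derived master key."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def wrap(secret: bytes, password: str, salt: bytes) -> bytes:
    """Toy protection of a 32-byte secret under the logon password.
    XOR is its own inverse, so the same call also unwraps."""
    return bytes(a ^ b for a, b in zip(secret, master_key(password, salt)))

# The user's EFS private key, protected under the current password.
salt = os.urandom(16)
efs_private_key = os.urandom(32)
protected = wrap(efs_private_key, "OldP@ssw0rd", salt)

# Administrative reset: the password changes, but the protected blob doesn't,
# so unwrapping with the new password yields garbage.
assert wrap(protected, "NewP@ssw0rd", salt) != efs_private_key
# Changing the password back restores access, as described above.
assert wrap(protected, "OldP@ssw0rd", salt) == efs_private_key
```

A password reset disk works because it re-protects the key material as part of the reset, which a bare administrative reset does not.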

Encrypted files can be stored in Web folders. The Windows XP Professional Resource Kit section “Remote EFS Operations in a Web Folder Environment” explains how.

Windows Server 2003 incorporates the changes introduced in Windows XP Professional and adds the following:

A default domain Public Key recovery policy is created, and a recovery agent certificate is issued to the Administrator account.

Certificate Services adds the ability to customize certificate templates and to archive keys. With appropriate configuration, archival of user EFS keys can be instituted, and recovery of EFS-encrypted files can then be accomplished by recovering the user’s encryption keys instead of decrypting via a file recovery agent. A walk-through providing step-by-step configuration of Certificate Services for key archival is available in “Certificate Services Example Implementation: Key Archival and Recovery.”

Windows Server 2003 enables users to back up their EFS key(s) directly from the command line and from the details property page by clicking a “Backup Keys” button.

Unauthorized persons may attempt to obtain the information encrypted by EFS. Sensitive data may also be inadvertently exposed. Two possible causes of data loss or exposure are misuse (improper use of EFS) or abuse (attacks mounted against EFS-encrypted files or systems where EFS-encrypted files exist).

Inadvertent Problems Due to Misuse

Several issues can cause problems when using EFS. First, when improperly used, sensitive files may be inadvertently exposed. In many cases this is due to improper or weak security policies and a failure to understand EFS. The problem is made all the worse because users think their data is secure and thus may not follow usual precautionary methods. This can occur in several scenarios:

If, for example, users copy encrypted files to FAT volumes, the files will be decrypted and thus no longer protected. Because the user has the right to decrypt files that they encrypted, the file is decrypted and stored in plaintext on the FAT volume. Windows 2000 gives no warning when this happens, but Windows XP and Windows Server 2003 do provide a warning.

If users provide others with their passwords, these people can log on using these credentials and decrypt the user’s encrypted files. (Once a user has successfully logged on, they can decrypt any files the user account has the right to decrypt.)

If the recovery agent’s private key isn’t archived and removed from the recovery agent profile, any user who knows the recovery agent credentials can log on and transparently decrypt any encrypted files.

By far, the most frequent problem with EFS occurs when EFS encryption keys and/or recovery keys aren’t archived. If keys aren’t backed up, they cannot be replaced when lost; if keys cannot be used or replaced, data can be lost. If Windows is reinstalled (perhaps as the result of a disk crash), the keys are destroyed. If a user’s profile is damaged, the keys are destroyed. In these, or any other cases in which keys are damaged or lost and backup keys are unavailable, encrypted files cannot be decrypted. The encryption keys are bound to the user account, and a new installation of the operating system means new user accounts; a new user profile means new user keys. If keys were archived, or exported, they can be imported into a new account. If a recovery agent for the files exists, that account can be used to recover the files. In many cases where keys are destroyed, however, both user and recovery keys are absent and there is no backup, resulting in lost data.

Additionally, many other smaller things may render encrypted files unusable or expose some sensitive data, such as the following:

Finally, keeping data secure takes more than simply encrypting files. A systems-wide approach to security is necessary. You can find several articles that address best practices for systems security on the TechNet Best Practices page at http://www.microsoft.com/technet/archive/security/bestprac/bpent/sec2/secentbb.mspx. The articles include

Attacks and Countermeasures: Additional Protection Mechanisms for Encrypted Files

Any user of encrypted files should recognize potential weaknesses and avenues of attack. Just as it’s not enough to lock the front door of a house without considering back doors and windows as avenues for a burglar, encrypting files alone isn’t enough to ensure confidentiality.

Use defense in depth and use file permissions. The use of EFS doesn’t obviate the need to use file permissions to limit access to files; file permissions should be used in addition to EFS. If attackers have obtained encryption keys, they can import them into an account and decrypt files. However, if that account is denied access to the file, their attempts to gain the sensitive information will be foiled.

Use file permissions to deny delete. Encrypted files can be deleted. If attackers cannot decrypt the file, they may choose to simply delete it. While they don’t have the sensitive information, you don’t have your file.

Protect user credentials. If an attacker can discover the identity and password of a user who can decrypt a file, the attacker can log on as that user and view the files. Protecting these credentials is paramount: once attackers have successfully logged on, they can decrypt any files the compromised account has the right to decrypt. The best defense is a strong password policy, user training on devising strong passwords, and sound practices for protecting those credentials. An excellent best-practices approach to password policy can be found in the Windows Server 2003 product documentation.

Protect recovery agent credentials. Similarly, if an attacker can log on as a recovery agent, and the recovery agent private key hasn’t been removed, the attacker can read the files. Best practices dictate the removal of the recovery agent keys, the restriction of this account’s usage to recovery work only, and the careful protection of credentials, among other recovery policies. The sections about recovery and best practices detail these steps.

Seek out and manage areas where plaintext copies of the encrypted files or parts of the encrypted files may exist. If attackers have possession of, or access to, the computer on which encrypted files reside, they may be able to recover sensitive data from these areas, including the following:

Data shreds (remanence) that exist after encrypting a previously unencrypted file (see the “Special Operations” section of this paper for information about using cipher.exe to remove them)

The paging file (see “Increasing Security for Open Encrypted Files,” an article in the Windows XP Professional Resource Kit, for instructions and additional information about how to clear the paging file on shutdown)

Hibernation files (see “Increasing Security for Open Encrypted Files” at http://technet.microsoft.com/library/bb457116.aspx)

Temporary files (determine where applications store temporary files, and encrypt those folders as well to resolve this issue)

Printer spool files (see the “Special Operations” section)

Provide additional protection by using the System Key. Using Syskey provides additional protection for password values and values protected in the Local Security Authority (LSA) Secrets (such as the master key used to protect users’ cryptographic keys). Read the article “Using the System Key” in the Windows 2000 Resource Kit’s Encrypting File System chapter. A discussion of the use of Syskey, and of possible attacks against a Syskey-protected Windows 2000 computer and their countermeasures, can be found in the article “Analysis of Alleged Vulnerability in Windows 2000 Syskey and the Encrypting File System.”

If your policy is to require that data be stored on file servers, not on desktop systems, you will need to choose a strategy for doing so. Two possibilities exist: storage in normal shared folders on file servers, or the use of Web folders. Both methods require configuration, and you should understand their benefits and risks.

If encrypted files are going to be stored on a remote server, the server must be configured to do so, and an alternative method, such as IP Security (IPSec) or Secure Sockets Layer (SSL), should be used to protect the files during transport. Instructions for configuring the server are discussed in “Recovery of Encrypted Files on a Server” (283223) and “HOW TO: Encrypt Files and Folders on a Remote Windows 2000 Server” (320044). However, the latter doesn’t mention a critical step, which is that the remote server must be trusted for delegation in Active Directory. Quite a number of articles can be found, in fact, that leave out this step. If the server isn’t trusted for delegation in Active Directory, and a user attempts to save the file to the remote server, an “Access Denied” error message will be the result.

If you need to store encrypted files on a remote server in plaintext (local copies are kept encrypted), you can. The server must, however, be configured to make this happen. You should also realize that once the server is so configured, no encrypted files can be stored on it. See the article “HOW TO: Prevent Files from Being Encrypted When Copied to a Server” (302093).

You can store encrypted files in Web folders when using Windows XP or Windows Server 2003. The Windows XP Professional Resource Kit section “Remote EFS Operations in a Web Folder Environment” explains how.

If your Web applications must require authentication before granting access to EFS files stored in a Web folder, the necessary code is detailed in “HOW TO: Use Encrypting File System (EFS) with Internet Information Services” (243756).

Once you know the facts about EFS and have decided how you are going to use it, you should use these documents as a checklist to determine that you have designed the best solution.

By default, EFS certificates are self-signed; that is, users don’t need to obtain certificates from a CA. When a user first encrypts a file, EFS looks for the existence of an EFS certificate. If one isn’t found, it looks for a Microsoft Enterprise CA in the domain. If a CA is found, a certificate is requested from it; if not, a self-signed certificate is created and used. However, more granular control of EFS, including EFS certificates and EFS recovery, can be established if a CA is present. You can use Windows 2000 or Windows Server 2003 Certificate Services. The following articles explain how.
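
The lookup order just described is, in effect, a three-branch decision. This small Python sketch restates it; the function and its string results are illustrative, not an actual API:

```python
def select_efs_certificate(has_efs_cert: bool, enterprise_ca_present: bool) -> str:
    """Sketch of the certificate-lookup order described above. The real
    logic lives inside the EFS service; these are the three outcomes
    the text names."""
    if has_efs_cert:
        return "existing certificate"          # reuse the user's EFS cert
    if enterprise_ca_present:
        return "certificate requested from enterprise CA"
    return "self-signed certificate"           # no PKI required at all
```

The fallback to self-signing is why EFS works on stand-alone machines with no PKI whatsoever.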

Troubleshooting EFS is easier if you understand how EFS works. There are also well known causes for many of the common problems that arise. Here are a few common problems and their solutions:

You changed your user ID and password and can no longer decrypt your files. There are two possible approaches to this problem, depending on what you did. First, if the user account was simply renamed and the password reset, the problem may be that you’re using XP, and this behavior is expected: when an administrator resets an XP user’s account password, the account’s association with the EFS certificate and keys is removed. Changing the password back to the previous password can reestablish your ability to decrypt your files. For more information, see “User Cannot Gain Access to EFS Encrypted Files After Password Change or When Using a Roaming Profile” (331333), which explains how XP Professional encrypted files cannot be decrypted, even by the original account, after an administrator has changed the password. Second, if you truly have a completely different account (your account was damaged or accidentally deleted), then you must either import your keys (if you’ve exported them) or ask an administrator to use recovery agent keys (if implemented) to recover the files. Restoring keys is detailed in “HOW TO: Restore an Encrypting File System Private Key for Encrypted Data Recovery in Windows 2000” (242296). How to use a recovery agent to recover files is covered in “Five-Minute Security Advisor: Recovering Encrypted Data Using EFS.”

Read the original here:
The Encrypting File System – technet.microsoft.com

FBI can’t break the encryption on Texas shooter’s smartphone


The Federal Bureau of Investigation has not been able to break the encryption on the phone owned by a gunman who killed 26 people in a Texas church on Sunday.

“We are unable to get into that phone,” FBI Special Agent Christopher Combs said in a press conference yesterday (see video).

Combs declined to say what kind of phone was used by gunman Devin Kelley, who killed himself after the mass shooting. “I’m not going to describe what phone it is because I don’t want to tell every bad guy out there what phone to buy, to harass our efforts on trying to find justice here,” Combs said.

The phone is an iPhone, The Washington Post reported today:

After the FBI said it was dealing with a phone it couldn’t open, Apple reached out to the bureau to learn if the phone was an iPhone and if the FBI was seeking assistance. Late Tuesday an FBI official responded, saying it was an iPhone but the agency was not asking anything of the company at this point. That’s because experts at the FBI’s lab in Quantico, Va., are trying to determine if there are other methods to access the phone’s data, such as through cloud storage backups or linked laptops, these people said.

The US government has been calling on phone makers to weaken their devices’ security, but companies have refused to do so. Last year, Apple refused to help the government unlock and decrypt the San Bernardino gunman’s iPhone, but the FBI ended up paying hackers for a vulnerability that it used to access data on the device.

Deliberately weakening the security of consumer devices would help criminals target innocent people who rely on encryption to ensure their digital safety, Apple and others have said.

“With the advance of the technology in the phones and the encryptions, law enforcement, whether it’s at the state, local, or the federal level, is increasingly not able to get into these phones,” Combs said yesterday.

Combs said he has no idea how long it will take before the FBI can break the encryption. “I can assure you we are working very hard to get into the phone, and that will continue until we find an answer,” he said. The FBI is also examining “other digital media” related to the gunman, he said.

There are currently “thousands of seized devices sit[ting] in storage, impervious to search warrants,” Deputy Attorney General Rod Rosenstein said last month.


DOJ: Strong encryption that we don’t have access to is …

US Deputy Attorney General Rod Rosenstein delivers remarks at the 65th Annual Attorney General’s Awards Ceremony at the Daughters of the American Revolution Constitution Hall, October 25, 2017, in Washington, DC.

Just two days after the FBI said it could not get into the Sutherland Springs shooter’s seized iPhone, Politico Pro published a lengthy interview with a top Department of Justice official who has become the “government’s unexpected encryption warrior.”

According to the interview, which was summarized and published in transcript form on Thursday for subscribers of the website, Deputy Attorney General Rod Rosenstein indicated that the showdown between the DOJ and Silicon Valley is quietly intensifying.

“We have an ongoing dialogue with a lot of tech companies in a variety of different areas,” he told Politico Pro. “There’s some areas where they are cooperative with us. But on this particular issue of encryption, the tech companies are moving in the opposite direction. They’re moving in favor of more and more warrant-proof encryption.”

While the battle against encryption has been going on within federal law enforcement circles since at least the early 1990s, Rosenstein has been the most outspoken DOJ official on this issue in recent months.

The DOJ’s number two has given multiple public speeches in which he has called for “responsible encryption.” The interview with Politico Pro represents the clearest articulation of the DOJ’s position on this issue, and it suggests that a redux of the 2016 FBI v. Apple showdown is inevitable in the near future.

“I want our prosecutors to know that, if there’s a case where they believe they have an appropriate need for information and there is a legal avenue to get it, they should not be reluctant to pursue it,” Rosenstein said. “I wouldn’t say we’re searching for a case. I’d say we’re receptive, if a case arises, that we would litigate.”

What Rosenstein didn’t note, however, is that the DOJ and its related agencies, including the FBI, are not taking encryption lying down.

The FBI maintains an office, known as the National Domestic Communications Assistance Center (NDCAC), which actively provides technical assistance to local law enforcement in high-profile cases.

In its most recently published minutes from May 2017, the NDCAC said that one of its goals is to make such commercial tools, like Cellebrite’s services, “more widely available” to state and local law enforcement. Earlier this year, the NDCAC provided money to Miami authorities to pay Cellebrite to successfully get into a seized iPhone in a local sextortion case.

In the interview, Rosenstein also said he “favors strong encryption.”

“I favor strong encryption, because the stronger the encryption, the more secure data is against criminals who are trying to commit fraud,” he explained. “And I’m in favor of that, because that means less business for us prosecuting cases of people who have stolen data and hacked into computer networks and done all sorts of damage. So I’m in favor of strong encryption.”

“This is, obviously, a related issue, but it’s distinct, which is, what about cases where people are using electronic media to commit crimes? Having access to those devices is going to be critical to have evidence that we can present in court to prove the crime. I understand why some people merge the issues. I understand that they’re related. But I think logically, we have to look at these differently. People want to secure their houses, but they still need to get in and out. Same issue here.”

He later added that the “absolutist position” that strong encryption should be, by definition, unbreakable is “unreasonable.”

“And I think it’s necessary to weigh law enforcement equities in appropriate cases against the interest in security,” he said.

The DOJ’s position runs counter to the consensus of information security experts, who say that it is impossible to build an encryption system that is as strong as possible while also allowing the government access under certain conditions.

“Of course, criminals and terrorists have used, are using, and will use encryption to hide their planning from the authorities, just as they will use many aspects of society’s capabilities and infrastructure: cars, restaurants, telecommunications,” Bruce Schneier, a well-known cryptographer, wrote last year.

“In general, we recognize that such things can be used by both honest and dishonest people. Society thrives nonetheless because the honest so outnumber the dishonest. Compare this with the tactic of secretly poisoning all the food at a restaurant. Yes, we might get lucky and poison a terrorist before he strikes, but we’ll harm all the innocent customers in the process. Weakening encryption for everyone is harmful in exactly the same way.”

Rosenstein closed his interview by noting that he understands that re-engineering encryption to accommodate government access may make it weaker.

“And I think that’s a legitimate issue that we can debate: how much risk are we willing to take in return for the reward?” he said.

“My point is simply that I think somebody needs to consider what’s on the other side of the balance. There is a cost to having impregnable security, and we’ve talked about some of the aspects of that. The cost is that criminals are going to be able to get away with stuff, and that’s going to prevent us in law enforcement from holding them accountable.”


Trump’s DOJ tries to rebrand weakened encryption as …

A high-ranking Department of Justice official took aim at encryption of consumer products today, saying that encryption creates “law-free zones” and should be scaled back by Apple and other tech companies. Instead of encryption that can’t be broken, tech companies should implement “responsible encryption” that allows law enforcement to access data, he said.

“Warrant-proof encryption defeats the constitutional balance by elevating privacy above public safety,” Deputy Attorney General Rod Rosenstein said in a speech at the US Naval Academy today (transcript). “Encrypted communications that cannot be intercepted and locked devices that cannot be opened are law-free zones that permit criminals and terrorists to operate without detection by police and without accountability by judges and juries.”

Rosenstein was nominated by President Donald Trump to be the DOJ’s second-highest-ranking official, after Attorney General Jeff Sessions. He was confirmed by the Senate in April.

Rosenstein’s speech makes several references to Apple, continuing a battle over encryption between Apple and the US government that goes back to the Obama administration. Last year, Apple refused to help the government unlock and decrypt the San Bernardino gunman’s iPhone, but the FBI ended up paying hackers for a vulnerability that it used to access data on the device.

“Fortunately, the government was able to access data on that iPhone without Apple’s assistance,” Rosenstein said. “But the problem persists. Today, thousands of seized devices sit in storage, impervious to search warrants.”

“If companies are permitted to create law-free zones for their customers, citizens should understand the consequences,” he also said. “When police cannot access evidence, crime cannot be solved. Criminals cannot be stopped and punished.”

We asked Apple for a response to Rosenstein’s speech and will update this story if we get one.

Separately, state lawmakers in New York and California have proposed legislation to prohibit the sale of smartphones with unbreakable encryption.

Despite his goal of giving law enforcement access to encrypted data on consumer products, Rosenstein acknowledged the importance of encryption to the security of computer users. He said that “encryption is a foundational element of data security and authentication,” that “it is essential to the growth and flourishing of the digital economy,” and that “we in law enforcement have no desire to undermine it.”

But Rosenstein complained that “mass-market products and services incorporating warrant-proof encryption are now the norm,” that instant-messaging service encryption cannot be broken by police, and that smartphone makers have “engineer[ed] away” the ability to give police access to data.

Apple CEO Tim Cook has argued in the past that the intentional inclusion of vulnerabilities in consumer products wouldn’t just help law enforcement solve crimes; it would also help criminals hack everyday people who rely on encryption to ensure their digital safety.

Rosenstein claimed that this problem can be solved with “responsible encryption.” He said:

Responsible encryption is achievable. Responsible encryption can involve effective, secure encryption that allows access only with judicial authorization. Such encryption already exists. Examples include the central management of security keys and operating system updates; the scanning of content, like your e-mails, for advertising purposes; the simulcast of messages to multiple destinations at once; and key recovery when a user forgets the password to decrypt a laptop.

No one calls any of those functions a “back door.” In fact, those capabilities are marketed and sought out by many users.

It’s not clear exactly how Rosenstein would implement his desired responsible encryption.

Rosenstein’s “key recovery when a user forgets the password to decrypt a laptop” reference seems to refer to Apple and Microsoft providing the ability to store recovery keys in the cloud. But users who encrypt Mac or Windows laptops aren’t required to do this; they can store the keys locally if they prefer. To guarantee law enforcement access in this scenario, people who encrypt laptops would have to be forced to store their keys in the cloud. Alternatively, Apple and Microsoft would have to change the way their disk encryption systems work, overriding the consumer’s preference to have an encrypted system that cannot be accessed by anyone else.
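The distinction matters because escrow is a property of how the volume key is wrapped, not of the encryption itself. The following is a minimal, hypothetical sketch of that idea; it uses a toy XOR wrap purely for illustration (real systems such as BitLocker and FileVault use AES-based key wrapping and vendor-run recovery services), and all names are invented:

```python
import hashlib
import secrets

# A disk's contents are encrypted with one volume key. What differs between
# "local-only" and "cloud recovery" modes is who holds a key capable of
# unwrapping that volume key.

def derive_wrapping_key(password: bytes, salt: bytes) -> bytes:
    # Password-based derivation, as real systems do (parameters illustrative).
    return hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

def wrap(key: bytes, wrapping_key: bytes) -> bytes:
    # Toy XOR "key wrap" for illustration only -- not real cryptography.
    return bytes(a ^ b for a, b in zip(key, wrapping_key))

unwrap = wrap  # XOR is its own inverse

disk_key = secrets.token_bytes(32)   # the actual volume encryption key
salt = secrets.token_bytes(16)

# Local-only mode: the sole wrapped copy requires the user's password.
user_wk = derive_wrapping_key(b"correct horse battery", salt)
local_blob = wrap(disk_key, user_wk)

# Cloud-recovery mode: a second copy is wrapped with a key the vendor's
# recovery service holds. Mandating this second copy is what "guaranteed
# law enforcement access" would require.
escrow_wk = secrets.token_bytes(32)
escrow_blob = wrap(disk_key, escrow_wk)

assert unwrap(local_blob, user_wk) == disk_key
assert unwrap(escrow_blob, escrow_wk) == disk_key
```

In local-only mode, losing the password means losing the data, which is exactly the property a mandatory escrow copy would remove.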

Rosenstein gave some further insight into how “responsible encryption” might work in this section of his speech:

We know from experience that the largest companies have the resources to do what is necessary to promote cybersecurity while protecting public safety. A major hardware provider, for example, reportedly maintains private keys that it can use to sign software updates for each of its devices. That would present a huge potential security problem, if those keys were to leak. But they do not leak, because the company knows how to protect what is important. Companies can protect their ability to respond to lawful court orders with equal diligence.

Of course, there are many examples of companies leaking sensitive data due to errors or serious vulnerabilities. The knowledge that errors will happen at some point explains why technology companies take so many precautions to protect customer data. Maintaining a special system that lets third parties access data that would otherwise only be accessible by its owner increases the risk that sensitive data will get into the wrong hands.

Rosenstein claimed that “responsible encryption can protect privacy and promote security without forfeiting access for legitimate law enforcement needs supported by judicial approval.” But he doubted that tech companies would do so unless forced to:

Technology companies almost certainly will not develop responsible encryption if left to their own devices. Competition will fuel a mindset that leads them to produce products that are more and more impregnable. That will give criminals and terrorists more opportunities to cause harm with impunity.

“Allow me to conclude with this thought,” Rosenstein said just before wrapping up his speech. “There is no constitutional right to sell warrant-proof encryption. If our society chooses to let businesses sell technologies that shield evidence even from court orders, it should be a fully-informed decision.”
