
> Installing software by piping from curl to bash is obviously a bad idea and a knowledgeable user will most likely check the content first

It's funny how so many obvious things are all but obvious when you think a little bit more about it. Interesting read on the subject: https://sandstorm.io/news/2015-09-24-is-curl-bash-insecure-p...

(I don't want to go down the `curl | bash` good-or-bad rabbit hole; my point is just that the topic cannot simply be dismissed as "obvious".)


For me personally, the security aspect isn't even the reason I dislike curl | bash installers. There's no standard behaviour for curl|sh, and it re-introduces many problems that were solved by package managers decades ago.

Each time I'm about to run `curl https://install.newhotness.io|sudo sh` I'm left with the following questions:

* Will it work? I'm on Arch Linux. This installer was probably tested on OSX and Ubuntu and deemed "portable". I've seen curl|sudo sh installers trying to apt-get install dependencies.

* Where does it place files? I've got a ~/.meteor folder that's 784MB. Why would you install software libraries and binaries to $HOME? I now have to tell my backup tool to ignore it when backing up my home partition, great. The FHS was established for a reason, people.

* Corollary to the previous question: How do I uninstall it? Maybe all files were installed to a hidden folder in ~/, maybe there's some stuff in /opt, or /usr/local, or who knows where else. If the installer doesn't implement uninstall functionality itself, I now have to go hunt for the stuff it placed in my filesystem.

I understand the need to be able to distribute your software without having to implement native packaging for OSX/Debian/Ubuntu/Red Hat/Fedora/Arch/...

Docker solved this problem. If you're feeling lazy when it comes to distributing your app, just ship a Docker container and be done with it.


> Corollary to the previous question: How do I uninstall it? Maybe all files were installed to a hidden folder in ~/, maybe there's some stuff in /opt, or /usr/local, or who knows where else.

How does one uninstall stuff on Linux anyway? My Linux boxes are always like trash cans - I add and add new stuff until it's time to upgrade either the system or the machine, so it gets wiped and replaced. Never figured out how to delete stuff in a way that will not leave a ton of things behind...


Package managers deal well with files they installed. What the app creates is your business... and for a good reason. You don't want a random package upgrade or conflict to remove your data just because something changed about the binary.


The following generally works for me. It removes the installed files, including configuration files. Additionally, it will remove any dependencies that were only used by packagename:

sudo apt-get purge --auto-remove packagename


>How does one uninstall stuff on Linux anyway?

with a package manager suited for your system, or with the knowledge of what files the software in question touches.
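
For anything that went in via the package manager, the manager itself can tell you what it owns. A quick sketch (the curl package is just an example):

    # which package owns a file, and which files did a package install?
    dpkg -S /usr/bin/curl && dpkg -L curl        # Debian/Ubuntu
    pacman -Qo /usr/bin/curl && pacman -Ql curl  # Arch
    rpm -qf /usr/bin/curl                        # RPM-based distros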


It's a best effort though, you're relying on the package maintainer to have tracked all the installed files and clean up anything they might have modified, which hopefully you or something else hasn't modified after the package was installed.

The packages authored by distributions tend to be pretty good, but I've seen RPMs with nothing more than tar zxf in them and no tracking of the files installed.

The major package managers expect the package to manage the artifacts installed; this works if everyone plays nice and does their job...


You're right that this requires well-maintained packages, and this is one of the strengths of Debian: its policies that require well-maintained packages. If you install a package that's part of Debian, you can count on it being well-maintained, as opposed to one put out by a random web site or developer, which may install but may not integrate well or clean up after itself.


One point I'd like to make is that this is the fault of the developer(s) and not Linux. I've seen this problem on OSX and Windows as well.


Package formats each have their own expectations. RPM likes to be pointed at a source tarball, given the steps to build the project, and given a list of the just-built files to include in the package. Those are the artifacts that RPM will manage (and they can be marked things like "doc", "config", or given specific attributes). Deb is broadly similar, but I feel like the framework is a little less controlled than the RPM one.
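
If you're curious what an installed RPM actually tracks, you can query those file classes directly; a quick sketch, using bash as a stand-in package:

    rpm -ql bash    # every file the package claims ownership of
    rpm -qc bash    # only the files marked %config
    rpm -qd bash    # only the files marked %doc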

The manager basically just defines a framework. Packages can abuse that framework in various ways (I haven't even mentioned install triggers, to execute code when other named packages are added or removed). There are a lot of ways to get it wrong, but when the dev builds a good package, it's almost magic how nicely it works.


I often do this to find out what files were installed where.

    $ touch now
    $ sh install.sh
    $ find / -newer now
That is supposing you trust install.sh of course.

You can redirect the output of find to a file for later removal. You will have to filter out false positives from the 'find' output, usually dev nodes.
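
A variant of the same trick that writes a manifest and skips most of the false positives (staying on the root filesystem and listing only regular files):

    touch /tmp/before
    sh ./install.sh
    find / -xdev -type f -newer /tmp/before 2>/dev/null > /tmp/installed-files.txt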


If you're on NixOS, you just remove the package from your configuration file and run the "rebuild" command.
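
Roughly, assuming a standard setup where the package was added to environment.systemPackages:

    # after deleting the package from /etc/nixos/configuration.nix
    sudo nixos-rebuild switch
    # optionally reclaim the space of store paths nothing references any more
    sudo nix-collect-garbage -d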


I just don't understand this. I've used Debian and Ubuntu for many years, and I've never had to reinstall a single installation for any reason. I use packages whenever possible, and when installing unpackaged software, I use checkinstall to make a package from it. When I move to a new system or disk, I rsync the filesystem over and boot from the new disk. There is no accumulated cruft; or if there is, it's limited to e.g. tiny config files in /etc from older package versions, which aren't a problem or even noticeable.

The closest to cruft is hidden directories in my homedir from software I don't use anymore. That isn't a problem either: they don't take up much space, and it's easy to manually delete them if I ever feel like it. If not, they are compressed and deduplicated by backup software, and they're hidden by default, so who cares?


I wish I had your luck... A lot of fixes on forums constantly suggest to uninstall first. Which sometimes actually fixes issues. Other times it makes things worse. Either way - buying a lottery ticket soon?


> A lot of fixes on forums constantly suggest to uninstall first.

A lot of people seeking and offering help on forums are former Windows users, who are used to doing that to fix problems. They just don't know what else to do, so that's what they do.

And, sure, if you break a configuration file or package dependency, wiping the disk and reinstalling will fix that--but so would fixing the file or dependency. In my experience, the nuclear option is never necessary with Linux. Worst case, you boot off a CD or USB drive, mount the partition, chroot if necessary, fix it, and reboot.

> Other times it makes things worse.

That is very strange, indeed. D:

> buying a lottery ticket soon?

Haha, nope, been having this "luck" for years now, but it only seems to affect my Linux installs. :(


To be honest, I've solved many more problems with Linux by reinstalling (usually a different distribution) than with Windows.

And just yesterday my Linux did what I've never seen an OS do before - it lost its UI clock display. The solution was simple and typical for Linux - it involved killing a random process. But at that moment I realized that we used to laugh that rebooting stuff is the Windows way of doing things. Not anymore, apparently.

;).


> And just yesterday my Linux did what I've never seen an OS do before - it lost its UI clock display. The solution was simple and typical for Linux - it involved killing a random process.

What desktop environment was that? There are many different ones available for Linux systems, and they don't all behave that way. e.g. I've never had that kind of problem with KDE3/TDE or KDE4.


My favorite is still hunting down .lck files with strace because of a bad lock file implementation. Reinstalling definitely fixes that one :). Looking at you, firefox and yum!


Your package manager should have a remove operation.


Open the package manager, tick the "remove" checkbox next to a package, then press "Apply".


/usr/local exists for a reason


And /opt for self-contained packages.


> How does one uninstall stuff on Linux anyway? My Linux boxes are always like trash cans...

That is called sedimentary data storage, i.e. piling new files on top of the old ones.


>For me personally, the security aspect isn't even the reason I dislike curl | bash installers. There's no standard behaviour for curl|sh, and it re-introduces many problems that were solved by package managers decades ago.

Every time I've seen it used, curl | bash is simply a way to deal with the fact that the users all have different package managers (often with outdated software) and quite complicated workflows to get stuff set up with them because there were problems that the package manager didn't solve.

>Docker solved this problem.

Ironic. Docker is one of those applications. And they used to ask for it to be installed with a curl pipe.

They seem to have replaced it with what is effectively a script executed with human hands. Not really an improvement.


I was genuinely confused with regard to your last point.

There's a section on their website[0] with installation instructions for various distributions using repositories and package managers. In other words, the right(tm) way of doing software distribution on Linux.

But then on a different part of their website I found the curl | sh instructions you allude to[1]. Bizarre that they still support that installation method.

To make matters worse, the big orange "Get Started" call to action button actually leads to the curl|sh installation instructions. In light of that I certainly can't blame you for not knowing about the proper distribution packages.

[0] https://docs.docker.com/engine/installation/

[1] https://docs.docker.com/linux/step_one/


That curled script actually installs the 'proper' distribution packages. It has a lot of clever stuff to figure out which environment it's running on and then runs whatever commands are necessary - including, e.g., adding keys so that the external repos where docker is hosted are trusted.

Point being that even if you install "the right way" on Linux using package managers, it's still a multi-step, complicated process that benefits from being automated in a bash script.


Thanks, that explains a lot.

It's really a shame that you have to wrap the best-practices approach in a bad practice in order to make it convenient.

> Point being that even if you install "the right way" on Linux using package managers, it's still a multi-step, complicated process that benefits from being automated in a bash script.

Right, because distributions want different things than software authors.

If you're a Debian/Ubuntu/RHEL maintainer, you want stable, tested versions of software that you can vouch for and support for several years.

If you're the maintainer of an application, in this case Docker, you want your users to run the latest and greatest version of your software. You can't just demand that each distro ship the latest and greatest version of your software. So if you're Docker you end up having to do the following for each distro:

  * package the software for that distro
  * sign that package
  * serve your own repository containing that package
  * ask your users to add the repository
  * ask your users to trust the signature
And then finally your users can install the package.
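
For a Debian/Ubuntu user the whole dance looks roughly like this (URL, key and package name are placeholders, not Docker's actual repo):

    curl -fsSL https://example.com/repo/gpg.key | sudo apt-key add -
    echo "deb https://example.com/repo/apt stable main" | \
        sudo tee /etc/apt/sources.list.d/example.list
    sudo apt-get update
    sudo apt-get install newhotness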

That's one of the reasons I like running Arch Linux on my workstation. The desires of the software authors, distro maintainers and users align.

  Docker Inc: "Hey wouldn't it be cool if your users could install the latest
               stable version of Docker?"
  Arch Linux maintainers: "Yeah, it would. We'll keep the latest version in the
                           repo and our users can just `sudo pacman -S docker`"
  Docker Inc: "Cool."
  Arch Linux users: "Cool."


You should try openSUSE Tumbleweed. It actually is faster at releasing new packages than Arch in some cases, and has much more testing and stability from my experience. But it should be noted there are good reasons to want stable software especially with Docker (which is something I actually maintain for SUSE systems) -- there's been a lot of very invasive architectural changes in the last 3 or 4 releases. And all of them cause issues and aren't backwards compatible or correctly handled in terms of how to migrate with minimal downtime.


Arch is far from stable enough to use on servers, which is why it's almost never used on servers.


I don't disagree.

You rarely see curl|sh installers for server stuff though. I think the topic at hand mostly concerns workstations.


FWIW, this is Meteor's justification in /usr/local/bin/meteor:

    # This is the script that we install somewhere in your $PATH (as "meteor")
    # when you run
    #   $ curl https://install.meteor.com/ | sh
    # It's the only file that we install globally on your system; each user of
    # Meteor gets their own personal package and tools repository, called the
    # warehouse (or, for 0.9.0 and newer, the "tropohouse"), in ~/.meteor/. This
    # means that a user can share packages among multiple apps and automatically
    # update to new releases without having to have permissions to write them to
    # anywhere global.


I used to do Linux packaging full-time and constantly hit limitations in what Linux packaging formats support or allow. There's still a lot of progress that could be made in that area, but unfortunately any attempt to improve will likely become that 15th competing standard XKCD refers to...


They could be in ~/.local/share/meteor following "XDG Base Directory Specification" (https://specifications.freedesktop.org/basedir-spec/basedir-... - seems well used on my Kubuntu but it's a case of "pick a standard" I guess).
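
Honouring the spec is basically a one-liner in an installer script; a minimal sketch (the meteor directory name is just carried over from the example above):

    # XDG says: use $XDG_DATA_HOME if set, otherwise fall back to ~/.local/share
    DATA_DIR="${XDG_DATA_HOME:-$HOME/.local/share}/meteor"
    mkdir -p "$DATA_DIR"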


For dealing with this problem, as well as anything that's not already packaged, the checkinstall utility is very helpful. It intercepts filesystem calls and builds deb/rpm/tgz packages that you install with your distro's package manager. It's been packaged in Debian for a long time. Highly recommended.
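
Typical use is to substitute it for the final install step; a minimal sketch, assuming the usual autotools-style build:

    ./configure
    make
    # checkinstall runs `make install` for you, records the files it writes,
    # and wraps them into a .deb/.rpm/.tgz you can later remove cleanly
    sudo checkinstall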


I don't think that's nearly as clever a defence of curl|bash as they appear to think.

A few of the shortcomings of curl|bash vs RPM and other package formats:

* curl|bash is not transactional. If you have an install/upgrade process in a different tab, and you close your terminal accidentally, how do you know if the install/upgrade completed?

* There's no obvious uninstall method, and you need to read the docs to see what, if anything, you should remove outside the main directory.

* There's no way of verifying if files have been tampered with post installation (see the sketch after this list). This has negative security and operational consequences.

* It's much more vulnerable to MiTM attacks - a proxy with a successfully faked certificate can trivially modify a bash script on the wire and add a malicious command to it. This is hard for a user to detect. Packages are (on most distros) installed via package managers that verify GPG signing keys, making in-flight modification far more difficult.

* Packages are much more auditable - I can download a package and inspect it, and know that it will run the same actions on every machine it's deployed to. curl|bash can trivially serve different instructions to different requests, based on whatever secret criteria the server operator decides.
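
On the package-manager side that tamper check already exists; a quick sketch of what it looks like (openssh-server is just an example package, and debsums is a separate package on Debian/Ubuntu):

    rpm -V openssh-server      # compare installed files against the RPM database
    debsums -c openssh-server  # Debian/Ubuntu: list files whose checksums changed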

If you're going to argue for skipping the generally accepted $good_thing in favour of a $quick_and_easy solution that's widely perceived to be insecure, you need to make very sure that you're not just trying to justify laziness. I don't think sandstorm have really thought about this hard enough.


I think the many discussions about this conflates two things:

1. "curl|bash" vs "curl >install.sh; sh install.sh" 2. install scripts vs package managers

The OP talks about how to detect a streaming execution of a script and exploit it. This proof-of-concept exploit aims at discouraging the use of curl|bash as a means of installing code, and instead promotes at least saving the script (and reviewing it), i.e. the "curl >install.sh; sh install.sh" approach.

All of the points you mentioned are valid and important pain points that affect any arbitrary executable install script (not verified cryptographically using a side channel). Deciding whether to use an install script or a packaging system is really a tradeoff.

The "obvious" problem the OP was talking about was that smell you feel when you see piping code to bash. I think OP made an excellent job showing how you can make it harder for people to spot and audit malicious code and how this is pretty serious downside for piping code into bash; It's more like: "I have a gut feeling this is horrible, so let's think about a way to prove that gut feeling", and the proof is insightful and non obvious.


(Almost) all of your points are fair, but slightly tangential to their article: they want to put the "curl | bash is _insecure_" myth to rest. They explicitly acknowledge that it is worse than package managers in many ways.

Perhaps they should have said: "however, the following non-security arguments are valid, and there are more we didn't list." Agreed.

But, at the end of the day, they're right: it's not _less secure_ than, e.g., npm.

PS: In your list; "It's much more vulnerable to MiTM attacks" --- that's essentially the PGP vs HTTPS argument in disguise. Which they also cover, and acknowledge to be true; the argument is essentially "curl https:.. | bash is not less secure than any HTTPS based method, including npm, .isos, etc."

EDIT: and because this is such a hot topic, I just want to emphasize: I'm not arguing in favor of curl | bash. This is only a counter-argument to "curl | bash is less secure than other https- based install methods".


Many of my points have a security element to them. Such as inability to verify if files have been tampered with, inability to audit beforehand what the script will do and ease of MiTM attacks.

My point with the PGP argument is that you can indeed go through all those steps to verify their install script with GPG (although note that on top of the steps they run, you should ideally also verify the script against a known hash that you verified elsewhere to make sure that the contents of the script is what you expect, as well as the authors).

But the manual GPG verification process will be ignored by almost all users, who will decide they can't be bothered. My point about package managers is that they do this automatically, without requiring the user to jump through extra hoops. Which is much better than the 8 additional steps that almost no-one will bother with, in practice.

Also saying that it's not less secure than npm isn't a great benchmark - npm is really bad too. See the recent left-pad issues, for example. So yeah - it's not really much worse than other things which are also bad. But it's a lot worse than doing it properly, with a signed RPM repository.


> verify their install script with GPG (although note that on top of the steps they run, you should ideally also verify the script against a known hash that you verified elsewhere to make sure that the contents of the script is what you expect, as well as the authors).

If you've verified the script with GPG, the extra step of a hash is wholly unnecessary. Any change to the script's contents would immediately invalidate the GPG signature.
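
Something along these lines, assuming the project publishes a detached signature next to the script and you've already verified their public key (URLs are placeholders):

    curl -O https://install.example.com/install.sh
    curl -O https://install.example.com/install.sh.sig
    gpg --verify install.sh.sig install.sh && sh install.sh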


If you want to audit the script before you run it, you can just save it to a file first, read it, and then run it.

The point of these one-liners is to allow anybody to install it. It's insecure as hell, but whether you're piping it directly to bash or downloading an installer file and then executing it (which is how a great deal, if not most, of software is still distributed to the general public anyway) doesn't make a big difference.

That's why the linked article is interesting; it shows that in fact there is a difference between piping to bash and downloading an installer. The difference is important and concerns auditability. Anyone who cares to audit the script might not notice anything suspicious, but people who blindly pipe it to bash can potentially run code that is not the same as what auditors see.

What's basically happening is that either a malicious software publisher or a MiTM fools auditors. And fooling auditors is bad, because a lot of people rely on the community to signal this kind of abuse.

What an auditor could do to detect this kind of malware distribution is:

    curl https://foo/bar.sh | tee >(md5sum)| sh
If she gets a different hash than by fingerprinting a saved download, it's an indication that the server is tampering with the installer.
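
The saved-download fingerprint being, roughly:

    # then compare with the hash printed by the tee >(md5sum) run above
    curl https://foo/bar.sh -o bar.sh
    md5sum bar.sh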

However, people who follow good practices and want to audit the script are not affected by the problem in the first place, because they will just save the script and they will know how to execute it after having read it.

People who don't know how to audit the installer in the first place might not know how to verify the download with GPG either. Or worse, they won't bother, deem the software too hard to install, and choose to use something worse.

That's why sandstorm offers PGP-verified installs for those who know what they're doing and shows how to use keybase to solve the web of trust issue.

(I'm not affiliated with sandstorm, but they seem a very reasonable and competent bunch to me)


> I don't think sandstorm have really thought about this hard enough.

I suspect we've spent far more time thinking about this than most people. We've heard every argument many times. The referenced blog post specifically addresses the security argument (hence the title) because that's the one we feel is most misleading.

I absolutely agree that curl|bash is ugly compared to package managers, but there are some serious shortcomings of package managers too, and it's a trade-off. With deb or rpm, we are tied to the release schedules of the distros, which is unreasonably slow -- Sandstorm does a release every week whereas most distros don't release any more often than twice a year.

We can perhaps convince people to add our own servers as an apt source or whatever, but now the MITM attack possibility is back. Yes, packages are signed, but you'd have to get the signing key from us, probably over HTTPS. No one wants to do the web of trust thing in our experience (I know because we actually offer this option, as described in the blog post, and sadly, almost no one takes it).

The Sandstorm installer actually goes to great lengths to be portable and avoid creating a mess. It self-containerizes under /opt/sandstorm and doesn't touch anything else on your system. This actually means that Sandstorm is likely to work on every distro (given a new enough kernel, which the installer checks for). If we instead tried to fit into package managers, we'd need to maintain half a dozen different package formats and actually test that they install and work on a dozen distros -- it would get pretty hard to do that for every weekly release. (Incidentally, Sandstorm auto-updates, and it does check signatures when downloading updates.)

So, yeah, it's pretty complicated, and we're constantly re-evaluating whether our current solution is really the best answer, but for now it seems like what we have is the best trade-off.


What happens if your webhost is hacked and someone installs a malicious install.sh? Without a published signature to verify against, there's no way to detect it.


We do provide a signature. But of course if you're going to check the signature, you need to get our public key from somewhere. You can't just get it from our server, because then you have the same problem: if someone hacked our server then they could replace the key file with their own. We publish instructions to actually verify the key here: https://docs.sandstorm.io/en/latest/install/#option-3-pgp-ve... But as you can see, it's complicated, and most people aren't going to do it. If you're not going to go through the whole process, then checking a signature at all is pointless.

Note that distributing Sandstorm as .debs or .rpms wouldn't solve this, because we'd still be distributing them from our own server, and you'd still need to get our key from somewhere to check the signature.


What amuses me is that the people who are vehemently against using a bash script are often the same people who will run

    $ ./configure
    $ make
without a second thought.


Don't forget the most important part:

    sudo make install


If you use checkinstall(1) it'll see what `sudo make install` would do, and create a local package that would do that (e.g. a deb on ubuntu). This package can then be installed in the usual manner, which gives you uninstallability, transactionality, and ensures you don't overwrite any other files installed via a package. This is much better than curl|bash.


Correct me if I'm wrong but can't you also run arbitrary code while installing packages?

Either way I'm not saying `curl | bash` is good, I'm just trying to point out that people have a very knee jerk reaction to it and pay no mind to things which are similarly as dangerous.


Yes, it runs arbitrary code. But checkinstall avoids the other downsides of curl|bash.


Yes, but not as root!



We can get better compartmentalization than that with things like user namespaces (firejail[1]) or spawning a temporary user id[2].

[1] https://l3net.wordpress.com/projects/firejail/

[2] https://github.com/teran-mckinney/raru
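
For the firejail route, something like this is the usual shape (both flags exist in firejail; whether the installer still works without network access is another question):

    # no network access, throwaway home directory
    firejail --net=none --private sh ./install.sh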


But that means they have to steal the laptop in the two minute window before it goes to sleep.

It's not like your user password is 1234, right?


When I got my last computer, I just copied my home directory over. Funnily enough, my old login sessions worked.


Actually, 99% of the time this will install it to /usr/local.


I disagree. Many (probably most) makefiles conform to the GNU Coding Standards, which recommend declaring "all" to be the default make target. `make all` does not install; that's what `make install` is for.

https://www.gnu.org/prep/standards/html_node/Standard-Target...
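
In practice the split looks like this (prefix defaults to /usr/local per those standards; DESTDIR is the conventional knob for staged installs):

    make                              # default target "all": build only
    sudo make install                 # copy artifacts under $(prefix), /usr/local by default
    make install DESTDIR=/tmp/stage   # staged install, handy when building a package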


The point of the original parent comment about Make is not about well-conforming makefiles.

The idea is that if one doesn't trust a random curl-ed install file, they shouldn't trust a downloaded makefile either -- and for the same reasons. Yet tons of people complain about the first, but have been using the latter for decades without complaint...


The idea is that the Makefile (and maybe configure?) could, just like a "curled" install script, contain ANY command, even "sudo rm -rf /".

It's not about where it will install it if it works fine.


Recently I stumbled across hashpipe: "hashpipe helps you venture into the unknown. It reads from stdin, checks the hash of the content, and outputs it IF AND ONLY IF it matches the provided hash checksum." [https://github.com/jbenet/hashpipe]
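
I haven't checked hashpipe's exact CLI, but the idea is easy to approximate with stock tools; a rough sketch (the digest is a placeholder you'd obtain out of band):

    curl -fsSL https://example.com/install.sh -o /tmp/install.sh
    echo "<expected-sha256>  /tmp/install.sh" | sha256sum -c - && sh /tmp/install.sh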


I was going to try it, but it didn't have a curl|bash install instruction.


I know you're being facetious but `go get` is just as convenient and is a controversial installation method for different reasons.


Agreed 100%. Things installed on your machine have privileges regardless of how you install them.

Install software from verified sources.

Whether you use curl or not is irrelevant.


I'd be interested in knowing, how do you assess whether something is a "trusted source" or not?


Binaries and some websites are signed by a verified legal entity.

This is what the Debian guys standing in a circle holding their passwords are doing at your Linux event. It's also why when you visit github it says 'GitHub, Inc [US]' in your browser's address bar. Heads up: I work on the latter.


Sure but SSL identification is a very small part of the process of having a "trusted source".

There is, I'm sure, malicious software hosted on github, so I can download things from that trusted source and still have trouble.

There is insecure software hosted on github too; again, there's no protection based on the hosting company.

If a developer who hosts a repository on github has their credentials compromised, their software may become malicious.

Also, as an end-user (particularly a non-paying one) I have no visibility into GitHub's own security policies and practices, so assessing the level of trust placed there is tricky.

The Debian model has more checks and balances (i.e. it's harder to get from one set of compromised creds to a malicious package in a production repo), but it's still not perfect...


> This is what the Debian guys standing in a circle holding their passwords are doing at your Linux event.

Passports, surely? :P


Passports indeed. Sorry, autocorrect.


The same way we trust people. By reputation and familiarity.

If something is from e.g. Red Hat or a large project repository like Apache, it's more trusted than a random website with some unknown tarball...

If many people in websites and forums say "don't trust this program", well, don't trust it.

Etc.


Ahh but that's one of the hardest pieces of the trust puzzle.

On the internet, how do you know who's who? You download a package from npm. Who wrote it, what's their background, where do they work? What are their security practices like, and what are the security practices of the hosting company?

So we say Red Hat/Apache are more trusted, sure. That's one of the reasons I dislike curl|bash from random sites: there's no way of assessing trust in that software or its origin...


>On the internet, how do you know who's who? You download a package from npm. Who wrote it, what's their background, where do they work? What are their security practices like, and what are the security practices of the hosting company?

Check the npm package's ratings, and the github repository watchers and stars.

Try to find and read a few posts (and their comments) on the package. The more respected the source (e.g. HN comments vs comments on Digg), the better.


That's not a bad mechanism (if a bit time-consuming) for personal use, but it wouldn't likely work for corporate use...

Even there though all it does is move the problem one step away. Most npm packages have dependencies (and quite a few of them), you have no way of knowing whether the author of the main package did the checks you describe for their dependencies...


>Even there though all it does is move the problem one step away. Most npm packages have dependencies (and quite a few of them), you have no way of knowing whether the author of the main package did the checks you describe for their dependencies...

That's the nice thing: you don't need to. You just need to trust that the main package works well enough -- and you get that from the fact that it is in widespread use (many downloads, watchers, stars, posts, etc).


Errm, and if a dependency is untrustworthy it can execute code on your system at install time...

The fact that no one else is checking, and you are therefore in the same boat as other compromised people, might not be that much of a consolation.


The thing is, if thousands use it, then you'd know if it's unsafe...

Beyond that, yeah, nothing is absolutely certain in life except death and taxes...


In security, where breaches can cost a lot of money, relying on heavy use as an indication of trustworthiness may not be a great idea.

For example, OpenSSL was and is used by lots of people; this has not stopped it from having a lot of security problems...

And true, once the vulnerabilities get published you get to know about the problems more quickly, but in a world of targeted attacks and professional exploitation that might not be great comfort...


This assumption fails in the network-connected world, where a program can download custom instructions after installation.


It doesn't fail; it's still as usable as ever -- just not infallible, which nothing is anyway.

The dynamics of something with a larger user base being more trustworthy don't change because we're on the internet. It's a simple network (no pun intended) effect.

If the program "can download custom instructions after installation" then a large user base still means that users will be bound to find it sooner or later (and make it public/fix it) and it makes it less likely to be you (because you're 1 from 200,000 downloading it, not 1 of 20 or 200).


Every time I see that I just wget the script and inspect it. If it does anything too clever, I don't install it that way. If I can't reason about the shell script something is wrong.

It's like being emailed a .bat file in 1999. Would you blindly run it on your Windows box?! I hope not!


The script downloads a binary onto your system. How would you inspect that?


That article is good, but keybase.io is a non-starter until they open registration. I have been on the pending queue for over a year now.


You don't need a Keybase account to follow the Sandstorm PGP-verified installation instructions.



