`git init --bare` will give you a git repo without a working tree (just the contents typically found in the .git directory). This allows you to create things like `foo.git` instead of `foo/.git`.
“origin” is also just the default name for the cloned remote. It could be called anything, and you can have as many remotes as you’d like. You can even namespace what you push back to the same remote by changing the fetch and push paths. At one company it was common to push back to `$user/$feature` to avoid polluting the root namespace with personal branches. It was also common to push to `backup/$user` to keep a backup of an entire local repo.
I often add a hostname namespace when I’m working from multiple hosts and then push between them directly to another instead of going back to a central server.
For a small static site repo that has documents and server config, I have a remote like:
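The actual remote config was omitted from the comment; a minimal sketch of the idea (the remote name, URL, and refspec here are my assumptions) is a push refspec that lands everything under a separate namespace:

```shell
# Hypothetical remote: pushes go into a "laptop/" namespace on the server,
# so they can never overwrite the server's own branches.
git remote add server ssh://example.com/srv/site.git
git config remote.server.push '+refs/heads/*:refs/heads/laptop/*'

# "git push server" now publishes local master as laptop/master remotely.
git push server
```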
So I can push from my computer directly to that server, but those branches won’t overwrite the server’s branches. It acts like a reverse `git pull`, which can be useful for firewalls and other situations where my laptop wouldn’t be routable.
You can of course also run a local site generator here as well, although for dev3 I took a lighter-weight approach — I just checked in the HEADER.html file that Apache FancyIndexing defaults to including above the file directory listing and tweaked some content-types in the .htaccess file.
This could still fail to update the checkout if it has local changes, but only if they create a merge conflict. And it won't fail to update a bare repo, which is probably what your other checkouts are cloned from and therefore where they'll be pulling from.
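The behavior described matches git's `receive.denyCurrentBranch = updateInstead` setting; a local sketch (repo names and paths are illustrative):

```shell
# "Server" side: a normal repo that accepts pushes to its checked-out
# branch and updates the working tree in place.
git init server && cd server
git config receive.denyCurrentBranch updateInstead
git commit --allow-empty -m "initial commit"
cd ..

# "Laptop" side: clone, change something, push; the server's checkout
# is updated unless it has conflicting local changes.
git clone server laptop && cd laptop
echo hello > index.html
git add index.html && git commit -m "add page"
git push origin HEAD
# index.html now exists in ../server's working tree.
```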
I cannot emphasize this whole notion enough. Very roughly, GitHub is to git what Gmail is to email.
It's mostly probably fine if that's the thing most of everybody wants to use and it works well; but also it's very unwise to forget that the point was NEVER to have a deeply centralized thing -- and that idea is BUILT into the very structure of all of it.
And it didn’t work. You have to ssh to the remote server and “git init” on a path first. How uncivilized.
Bitkeeper and a few other contemporaries would let you just push to a remote path that doesn’t exist yet and it’d create it. Maybe git added this since then, but at the time it seemed like a huge omission to me.
If you want a public facing "read only" ui to public repositories you can use cgit (https://git.zx2c4.com/cgit/about/) to expose them. That will enable others to git clone without using ssh.
I keep my private repositories private and expose a few public ones using cgit.
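A minimal `cgitrc` sketch of that split (paths are assumed): cgit lists only what `scan-path` finds, so private repos kept elsewhere on disk never appear.

```
# /etc/cgitrc -- hypothetical layout: public repos under /srv/git/public,
# private ones elsewhere, invisible to cgit.
root-title=My public repos
scan-path=/srv/git/public
```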
The more I use git, the more I discover more depth to it.
So many features and concepts; it's easy to think you understand the basics, but you need to dig deep into its origin and rationale to begin to grasp the way of thinking it is built around.
And the API surface area is much larger than one would think, like an iceberg.
So I find it really weirdly low level in a way. Probably what is needed is a higher-level CLI that uses it in the most sensible, default way, because the mental model most people use it with is certainly inadequate.
My favorite git trick is using etckeeper & git subtree to manage multiple machines from a single repo. A single git repo can “fan out” to dozens of instances. It's useful even for “managed” hosts with terraform, because etckeeper snapshots config with every change, catching bugs in the terraform config.
During the dev / compile / test flow, git makes for a lightweight CI that reduces the exposure of your primary repo. Just run `watch -n 60 make` on the target and push using `git push`. The target can run builds without having any access to your primary (GitHub) repo.
> It’s also backed up by default: If the server breaks, I’ve still got the copy on my laptop, and if my laptop breaks, I can download everything from the server.
This is true, but I do also like having backups that are entirely decoupled from my own infrastructure. GitHub personal private accounts are free and I believe they store your data redundantly in more than one region.
I imagine there's a way to setup a hook on your own server such that any pushes are then pushed to a GitHub copy without you having to do anything else yourself.
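A sketch of such a hook (the remote name, paths, and URL are assumptions, not something the commenter specified): a `post-receive` hook in the server's bare repo that mirrors every push to a GitHub copy.

```shell
# In the bare repo on your own server:
cd /srv/git/project.git
git remote add github git@github.com:you/project.git

# post-receive runs after every successful push; --mirror forwards
# all refs (branches, tags, deletions) to the GitHub copy.
cat > hooks/post-receive <<'EOF'
#!/bin/sh
git push --mirror github
EOF
chmod +x hooks/post-receive
```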
Some time ago, I was on a team of researchers collaborating with a hospital to build some ML models for them. I joined the project somewhat late. There was a big fuss over the fact that the hospital servers were not connected to the internet, so the researchers couldn't use GitHub, so they had been stalled for months. I told them that before GitHub there was `git`, and it is already on the servers... I "set up" a git system for them.
In interviews, I've literally asked senior devops engineers and senior software engineers if they have hosted their own git servers and how they'd initialise one, and not a single one has mentioned `git init --bare`... which is disconcerting. They can deploy appliances (like GitLab, Gitea) and build pipelines just fine, but none of them realized how git actually works underneath and how simple it all is.
Back when I started at my current job... 15 years ago... We had no central git server. No PR workflow. We just git pull-ed from each others machines. It worked better than you would expect.
We then went to using a central bare repo on a shared server, then to hosted GitLab (I think? It was Ruby and broke constantly), eventually landing on GitHub.
Self-hosting Git isn’t just geek freedom, it’s a mindset of redundancy. I’ve had repos vanish due to platform bans. That’s when it really hit me that “distributed” isn’t just a design, it’s a reminder of responsibility.
As a git user "not by choice" (my preference going for mercurial every single day), I never understood why git needs this distinction between bare/non-bare (or commit vs staging for that matter). Seems like yet another leaky abstraction/bad design choice.
So my friend and I were trying to learn some coding together. We had our laptops on the same wifi, and I wanted us to use git without depending on GitHub, but I was completely stumped as to how to actually connect us. I didn't want us to set up SSH servers on each other's laptops, giving each other full access to our computers, and sending patches to each other across the living room seems overkill when we're just sitting around hacking away on our keyboards, wanting to share what we've come up with every few minutes or so. I still have no idea how I would solve this. Maybe I'd just try coding with a syncthing directory shared between us, but then that's totally giving up on git.
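One answer (my suggestion, not from the thread) that needs neither SSH nor a server is `git bundle`: the whole history packed into a single file that can travel over syncthing, a USB stick, or a chat upload, and be cloned or fetched from directly.

```shell
# On one laptop: pack branch "master" (adjust to yours) into a file.
git bundle create /tmp/project.bundle master

# On the other laptop: clone straight from the file.
git clone -b master /tmp/project.bundle project

# Later rounds: make a fresh bundle, then fetch updates from it.
git -C project fetch /tmp/project.bundle master:refs/remotes/origin/master
```

Read-only sharing over the wifi itself is also possible with `git daemon`, which serves the unauthenticated git:// protocol without giving anyone shell access.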
The proper way to do this is to make a "bare" clone on the server (a clone without a checked-out branch). I was doing this in 2010, before I even signed up to GitHub.
I suspect many who always use git remotely don't know that you can easily work with local repositories as well using the file protocol: `git clone file:///path/to/repository` will create a clone of the repository at the path.
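A round trip over the file protocol, with assumed paths (any path you can reach works: a USB disk, an NFS/SMB mount, a synced folder):

```shell
# A bare repo on a reachable path acts as the "server".
git init --bare /mnt/usb/project.git

# Clone it via the file protocol; "origin" simply points at the path.
git clone file:///mnt/usb/project.git project
cd project

# Pushing and pulling work exactly as with a network remote.
git commit --allow-empty -m "first commit"
git push origin HEAD
```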
I used that all the time when I had to move private repositories back and forth from work, without ssh access.
Git post-update hooks to do deployment FTW. I looked into the whole push-to-github to kick off CI and deployment; but someone mentioned the idea of pushing over ssh to a repo on your server and having a post-update hook do the deployment, and that turned out to be much simpler.
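The deploy-hook pattern being described is typically just a few lines; a sketch with assumed paths and branch name:

```shell
# In the bare repo that receives pushes, hooks/post-update runs after
# the refs are updated; check the new master out into the web root.
cat > /srv/git/site.git/hooks/post-update <<'EOF'
#!/bin/sh
git --work-tree=/var/www/site checkout -f master
EOF
chmod +x /srv/git/site.git/hooks/post-update
```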
There was a brief period when Google Cloud had support for hosting git on a pay-per-use basis. (I think it was called Google Cloud Repositories.) It had a clunky but usable UI.
I really preferred the idea of just paying for what I used -- rather than being on a "freemium" model with GitHub.
But -- as many things with Google -- it was shutdown. Probably because most other people do prefer the freemium model.
I wonder if this kind of thing will come back in style someday, or if we are stuck with freemium/pro "tiers" for everything.
One note, xcode and maybe some other clients can't use http "dumb mode". Smart mode is not hard to set up, but it's a few lines of server config more than this hook.
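For reference, the smart-HTTP setup is roughly this much Apache config, wiring `git-http-backend` up as a CGI (the backend path is the Debian-style default and may differ on your system):

```apache
# Serve the repos under /srv/git over the smart HTTP protocol.
SetEnv GIT_PROJECT_ROOT /srv/git
SetEnv GIT_HTTP_EXPORT_ALL
ScriptAlias /git/ /usr/lib/git-core/git-http-backend/
```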
TIL about the update options for a checked-out branch. In practice, though, you usually want just a bare ".git" folder on the server.
This article is very bad advice. This way things are extremely brittle, and there's a reason all those settings are disabled by default. You will lose data, save for very specific use cases.
The vastly superior way is `git init --bare`, which is a first-class supported command without hacky settings.
This is definitely nice, but it doesn't really support the full range of Git features, because for example submodules cannot be local references. It's really just easier to set up gitolite and use it in almost exactly the same way, and it's much better.
I do something similar. I create a bare repo in my Dropbox folder or on a NAS mount, then check out from the bare repo's file path to wherever I'll be doing all the work.
What is it with code on blog posts where fonts have uneven size and/or font types (e.g. italics mixed in with regular)? I see this from time to time and I wonder if it’s intentional.
- a prod server (and a test server) with a git repo.
- a local machine with the git repo.
- a git server where code officially lives, nowadays just GitHub.
If I were to simplify and run my own git server of the third kind, I would probably not run a server for the sole purpose of hosting code, it would most likely run on the prod/test server.
So essentially I would be eliminating one node and simplifying. I don't know, maybe there are merits to having an official place for code to be in, even if just semantics.
I know you can also use branches: have a "master" branch with the code, and then have migrations just be merging from master into a prod branch. But then again, I could have just master branches; if it's on the test server, then it's the test branch.
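The branch-per-environment flow mentioned here is just ordinary merging (branch and remote names assumed):

```shell
# Promote master to production by merging it into the prod branch.
git checkout prod
git merge --no-ff master -m "deploy master to prod"
git push origin prod
```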
I don't know if spending time reinventing git workflows is a very efficient use of brain juice, though.
You already have a Git server (maurycyz.com)
634 points by chmaynard | 26 October 2025 | 422 comments
Comments
Tip: create a `git` user on the server and set its shell to `git-shell`. E.g.:
You might also want to restrict its directory and command access in the sshd config for extra security. Then, when you need to create a new repository, you run:
And use it like so: Or: This has the exact same UX as any code forge. I think that initializing a bare repository avoids the workarounds for pushing to a currently checked-out branch.
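The command examples were lost from this comment; they were presumably along these lines (the username, host, and paths are my guesses, not the commenter's):

```shell
# On the server: a dedicated user whose login shell only speaks git.
sudo useradd -m -s "$(command -v git-shell)" git

# Creating a new repository:
sudo -u git git init --bare /home/git/project.git

# From a client, use it like any forge remote:
git clone git@example.com:project.git
# or:
git remote add origin git@example.com:project.git
```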
I wrote a HOWTO a few weeks ago: http://mikhailian.mova.org/node/305
I got pwned this way before (by a pentester fortunately). I had to configure Apache to block the .git directory.
https://git-scm.com/docs/git-clone#_git_urls
https://git-scm.com/docs/git-init
It'd be great if there were more specific support. But in practice? No problems so far.
$ git push heroku master
So you'll have to make sure to push to e.g. GitHub as well for version control.
TIL that people use non-bare git.
Until you have more users than dollars that's all you need.
Why is GitHub popular? It's not because people are "dumb", as others think.
It's because GitHub "just works".
You don't need obscure tribal knowledge like seba_dos1 suggests [0] or this comment https://news.ycombinator.com/item?id=45711294
Git's own official documentation for setting this up [1], for example, is something I failed to get to work (it is vastly different from what OP is suggesting).
The problem with software development is that not knowing such "tribal knowledge" is considered incompetence.
People don't need to deal with obscure error messages, which is why they choose GitHub and why GitHub won.
Like the adage goes, "Technology is best when it is invisible."
[0] https://news.ycombinator.com/item?id=45711236
[1] https://git-scm.com/book/en/v2/Git-on-the-Server-Setting-Up-...