Zed now predicts your next edit with Zeta, our new open model

(zed.dev)

Comments

lionkor 14 February 2025
> Edit prediction won't be free forever, but right now we're just excited to share and learn.

I love Zed, and I'm happy to pay for AI stuff, but I won't be using this until they're done with their rug pull. Once I know how much it costs, I can decide whether I want to try integrating it into my workflow. Only THEN will I want to try it, and I'd be interested in a limited free trial, even just 24 hours.

Considering I've seen products like this range from free to hundreds of dollars per month, I'd rather not find out how good it is and then find out I can't afford it.

Other than that, for anyone wanting to try Zed:

- You can only run one LSP per file type, so your Rust will work fine, and your C++ too, but your Angular will not.

- Remote editing does not work on Windows (it's not implemented at all), so if you are on Windows, you cannot ssh into anything with the editor's remote editing feature. This means you cannot use your PC as a thin client to the actual chunky big work machine like you can with vscode. I've seen a PR that adds Windows ssh support, but it looked very stale.

mihaaly 14 February 2025
Sensitive readers, please be advised: quite a bit of a rant and angry reactions are coming, in an overreacting style, so please stop here if you are the sensitive type. The comments are unrelated to this particular product and are aimed at the universal approach to the broad topic nowadays. No offense to any specific person is intended.

I am fed up with all these tools predicting what I want to do. Badly!! Don't guess! Wait, and I will do what I want to do. I do not appreciate it when my wife tries to figure out what I want to say in the middle of my sentence and interrupts before I finish what I am saying; imagine how much I tolerate it from a f computer! I know what I am going to do, you do not! Let me do that already! This level of predicting our asses off everywhere has grown to be a f nuisance by now. I cannot simply do and focus on what I want to do because of the many distractions and suggestions and guesses and predictions of me and my actions that are in the way all the f time. Wait, and see! At this overly eager level, pushed into everything, it is a nuisance now! Too many times the acceptance of the - wrong - 'helping suggestion' is in the way too, hijacking that particular keyboard action usable elsewhere, breaking my flow, dragging in the unwanted stupid guess! Recovering my way of working from a pushy "feature" hiding/colliding with my usual actions, forced on me in a "security update" or other bullshit, and turning off and restoring the working practice that was already in place and worked, is an unwelcome chore too; ruined, now colliding with "smart prediction", not helping. In the long term, it is not a definitive help but an around-zero-sum game. Locally, in specific situations, too many times it is a strong negative because of the wrong it does! Too many problems here and there, accuracy- and implementation-wise. Forced everywhere. Don't be a smartass; you are just an algorithm, not a mind reader! Lay back and listen.

If prediction is that smart - being with us since the turn of the millennium here and there - then it should do my job perfectly and I can go walk outside and collect the money! Until then, f off!

yellow_lead 14 February 2025
Seems like you can't run it locally. I don't like my code being sent to a third party, especially when my employer may not agree with it.

I also edit secret/env files in my IDE, so for instance, a private key or API key could get sent, right?

I hope there will be a local option later.
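In the meantime, the exposure can at least be narrowed: Zed's settings include a `disabled_globs` list under `edit_predictions` (it appears in the config snippet quoted elsewhere in this thread), so something like the following should keep predictions out of secret/env files. The glob patterns here are only illustrative:

  {
    "edit_predictions": {
      "disabled_globs": ["**/.env*", "**/*.pem", "**/secrets/**"]
    }
  }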

thomascountz 14 February 2025
If you were looking for the configuration like I was[1][2]:

  {
    "show_edit_predictions": <true|false>,
    "edit_predictions": {
      "disabled_globs": [<globs>],
      "mode": <"eager_preview"|"auto">
    },
    "features": {
      "edit_prediction_provider": <"copilot"|"supermaven"|"zed"|"none">
    }
  }

[1]: https://zed.dev/docs/completions

[2]: https://zed.dev/docs/configuring-zed#edit-predictions

mikebelanger 14 February 2025
I've been using Zed for a few months now. One thing I really like about Zed is its relatively discreet advertising of new features, like this edit prediction one. It's just a banner shown in the upper left, and it doesn't block me from doing other stuff or force me to click "Got it" before using the application more.

This definitely counters the trend of putting up speech balloons/modals/other nonsense that forces a user to confirm a new feature. Good job, Zed team!

boxed 14 February 2025
I tried Copilot for a while, and my biggest gripe was tab for accepting the suggestion. I very often got a ton of AI garbage when I was just trying to indent some code.

Tab just doesn't seem like the proper interface for something like this.

dgacmu 14 February 2025
As a slight tangent, this prompted me to wonder about one of the things I _haven't_ enjoyed in my last two weeks of experimenting with zed: It tries to autocomplete comments for me. Hands off - that's where I think!

Fortunately, zed somewhat recently added options to disable these:

   "edit_predictions_disabled_in": ["comment"],
   "inline_completions_disabled_in": ["comment"]
My life with zed just got a little better. If I switch back to vscode I'll have to figure out the same setting there. :-)

dakiol 14 February 2025
Am I the only one who prefers stability instead of a constant rush of features in their text editors/IDEs? If it’s AI related I like them even less. I know I can stick forever with Vim, but damn, I tried Zed and it felt good.
elashri 14 February 2025
It seems that someone has already published different quantized versions of the model [1]. These can be used to define a Modelfile to use with Ollama locally. But I am not sure that Zed allows changing the endpoint for this feature yet (or ever). Of course it is open source and you can change it, but then you will need to build it yourself.

[1] https://huggingface.co/mradermacher/zeta-GGUF
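For the Ollama route, a Modelfile pointing at one of the downloaded GGUF files would look roughly like this (the filename and quant level are just examples):

  FROM ./zeta.Q4_K_M.gguf

followed by something like `ollama create zeta -f Modelfile` and `ollama run zeta`. Whether Zed can then be pointed at the local endpoint is, as noted, another question.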

markus_zhang 14 February 2025
I think the modern Intellisense has the right amount of prediction - offloads enough brain activity without completely relying on something else.

AI prediction feels way too much and way too eager to give me something. I don't know about you guys, but programming is an exercise for me, not just to make it work and call it a day.

However, AI would be useful if it could offer program structure and pattern recommendations. One big problem I now face, and I believe all hobbyists face too, is that as a program grows larger, it becomes increasingly difficult to keep it well structured and easy to expand -- on the other hand, premature architecting is also an issue. Reading other people's source is not particularly useful, because 1) you don't know whether it is suitable or even well written, and 2) usually it is too tough to read other people's source code.

coder543 14 February 2025
Two immediate issues that I noticed:

1. If I make a change and then undo it, so that the change was never made, it still seems to be in the edit history passed to the model, so the model is interested in predicting that change again. This felt too aggressive... maybe the very last edit should be forgotten if it is immediately undone. Maybe only edits that exist against the git diff should be kept... but perhaps that is too limiting.

2. It doesn't seem like the model is getting enough context. The editor would ideally be supplying the model with type hints for the variables in the current context, and based on those type hints being put into the context, it would also pull in some type definitions. (I was testing this on a Go project.) As it is, the model was clearly doing the best it could with the information available, but it needed to be given more information. Related, I wonder if the prediction could be performed in a loop. When the model suggests some code, the editor could "apply" that change so that the language server can see it, and if the language server finds an error in the prediction, the model could be given the error and asked to make another prediction.
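That loop might be sketched something like this; `predict`, `apply_edit`, and `diagnostics` are hypothetical stand-ins for the model call, a scratch-buffer edit, and the language-server check, not anything Zed actually exposes:

```python
# Hypothetical sketch of the predict -> apply -> diagnose loop described
# above. The three callables stand in for the model, a scratch-buffer
# edit, and the language server.

def refine_prediction(buffer, predict, apply_edit, diagnostics, max_rounds=3):
    """Ask the model for an edit, let the language server check it, and
    retry with the reported errors until the edit is clean or we give up."""
    errors = []
    for _ in range(max_rounds):
        edit = predict(buffer, errors)        # model sees prior LSP errors
        candidate = apply_edit(buffer, edit)  # apply in a scratch copy
        errors = diagnostics(candidate)       # ask the language server
        if not errors:
            return edit                       # clean prediction: surface it
    return None                               # never got a clean edit
```

The obvious costs are latency (each round is a model call plus a diagnostics pass) and the risk of the model thrashing on an error it can't fix, hence the round cap.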

keyle 14 February 2025
Good, charge for Zed and secure its future.

I find myself wanting to use Zed more and more every day, and shifting away from other editors whenever possible. Some LSP implementations are lacking... but it's getting damn close!

I love the new release every week. Zed is my recent love, along with Ghostty, which is also stellar.

Hanging by a thread for some sort of lldb/gdb integration with breakpoints and inspection! Hopefully some day, without becoming a bag of turd.

zeta0134 14 February 2025
I have mixed feelings about the name of this model. :P Though I suppose it's my own fault, naming myself after a letter.

I'm admittedly a bit surprised that there's a free/paid scheme for a 7b model though, as those are small enough to run locally. I suppose revenue streams are enticing and I can't fault the company for wanting to make money, but I'm also 100% against remote models for privacy reasons, making this a bit of a nonstarter for me. Depending on how heavily integrated this is, the mere presence of a remote-first prediction engine sorta turns me off the idea of the editor as a whole. If there were the option to run the model 100% local (sans internet) then I'd be more interested.

cameroncooper 14 February 2025
If the model is open source, I'm hoping for an option to run this feature locally for free. They seem to have support for running other models locally (e.g. deepseek-r1 through Ollama), so I'm hoping they will keep that up with edit prediction.

minzi 14 February 2025
Still no debugger. I know there is a branch open, but it’s surprising to me that there isn’t a more concentrated effort on getting that over the line. Major props to the folks working on it. I just wish they had more resources and help getting it done.

coder543 14 February 2025
If anyone is interested, the release of Zeta inspired me to write up a blog post this afternoon about LLM tab completions past and future: https://news.ycombinator.com/item?id=43053094

flkiwi 14 February 2025
I'm not a developer, but I use Zed for a lot of things that would be ripe for "AI" application in the current bubble. I, however, have exactly zero use for AI in those cases, and will reconsider any application that pushes AI. It's not just that I do not want to use AI features: (a) I am prohibited from doing so, and (b) the focus on deploying AI solutions raises serious concerns about a product's focus on and support of its core features. All of which is to say that Zed's AI features would be more valuable to me, and would drive quite a lot of good will, if they were an entirely removable module. No upsell notices, no suggested uses, just a complete absence of the functionality at the user's choice (like, say, an LSP).

NoboruWataya 14 February 2025
I haven't used Zed much but RustRover seems to have recently switched to a more aggressive/ambitious autocomplete. IIRC tab used to just complete the current word, now it tries to complete the rest of the line. Only it usually gets it wrong. Enter now seems to do what tab used to do and it's been quite annoying having to unlearn tab completing everything.

Maybe Zed's prediction is better (though to be honest I don't really care to find out). But I feel like autocomplete is something where usefulness drops off very quickly as the amount of predicted text increases. The thing is, it really has to be 100% correct, because correcting your mostly-correct auto-generated code seems more tedious and frustrating to me than just typing the correct code in the first place.

fau 14 February 2025
It's hard to care about AI features when a year later I still can't even get acceptable font rendering: https://github.com/zed-industries/zed/issues/7992

ramon156 14 February 2025
> zeta won't be free forever

Well, that's a bummer, but also very understandable. I hope they don't make the jump too early, because I still want to grow into Zed before throwing my wallet at them. So far it's very promising!

1f60c 14 February 2025
I wonder what this means for "Support using ollama as an inline_completion_provider" (https://github.com/zed-industries/zed/issues/15968). ':]

I hadn't heard of Baseten before (it seems to be in a hot niche along with Together.ai, OpenRouter, etc.) but I'm glad I did because I was actually noodling on something similar and now I don't have to do that anymore (though it did teach me a lot about Fly.io!). Yay economies of scale!

greener_grass 14 February 2025
When developing something I tend to have lots of programs open:

- The editor

- Several terminal windows

- Some docs

- GitHub PRs

- AWS console

- Admin tools like PgAdmin

- Teams, Slack, etc.

When screen-sharing with Zed, do I only get to share the editor?

Because (clunky as they are) video call apps let me share everything and this is table-stakes for collaboration.

gonational 14 February 2025
Zed is putting so much focus into AI that their editor is falling apart:

https://news.ycombinator.com/item?id=43041923

sarosh 14 February 2025
Interesting that the underlying model, a LoRA fine-tune of Qwen2.5-Coder-32B, relies on synthetic data from Claude[1]:

  But we had a classic chicken-and-egg problem—we needed data to train the model, but we didn't have any real examples yet. So we started by having Claude generate about 50 synthetic examples that we added to our dataset. We then used that initial fine-tune to ship an early version of Zeta behind a feature flag and started collecting examples from our own team's usage.

  ...

  This approach let us quickly build up a solid dataset of around 400 high-quality examples, which improved the model a lot!

I checked the training set but couldn't quickly identify which examples were Claude-produced[2]. It would be interesting to see them distinguished.

[1]: https://zed.dev/blog/edit-prediction

[2]: https://huggingface.co/datasets/zed-industries/zeta

idnty 14 February 2025
I like Zed as an editor, and I like how they've integrated LLMs and support a variety of them.

But it’s mind boggling they still don’t have a basic file diff tool. Just why?

fultonb 14 February 2025
I got beta access to this and love it. It is much more useful than Copilot by itself, and very useful for edits that are a little repetitive. I wish I could run the model locally, and given that it is open source and they have support for Ollama and other OSS tools, I feel like that would be an amazing feature.

aqueueaqueue 14 February 2025
Ah, that's a model that can run on a shitty old PC, right? I like the idea of tools being local again.

paradite 14 February 2025
DeepSeek also has a FIM (Fill In the Middle) completion model available via API, if anyone is interested in trying it out:

https://api-docs.deepseek.com/guides/fim_completion
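A minimal sketch of what such a request looks like, going by the linked docs (the endpoint is `POST https://api.deepseek.com/beta/completions`; field names should be double-checked against the current reference before use):

```python
# Build the JSON body for a DeepSeek FIM completion request. Nothing is
# sent here; this only shows the shape of the payload.
import json

def build_fim_request(prefix: str, suffix: str, max_tokens: int = 64) -> dict:
    return {
        "model": "deepseek-chat",
        "prompt": prefix,        # code before the cursor
        "suffix": suffix,        # code after the cursor
        "max_tokens": max_tokens,
    }

body = build_fim_request("def add(a, b):\n    return ", "\n\nprint(add(1, 2))")
print(json.dumps(body, indent=2))
```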

notsylver 14 February 2025
This looks a lot more impressive than a lot of GitHub Copilot alternatives I've seen. I wonder how hard it would be to port this to vscode - using remote models for inline completion always seemed wrong to me, especially with server latency and network issues.

Alifatisk 14 February 2025
Are there any plans for Zed to add basic functionality like a task runner? A button to run / debug code? Only having autocomplete for Java code gives the impression that Zed is only a text editor and not an IDE.

tombh 14 February 2025
I really want to like Zed, and their AI may actually be useful. But when I hear things like "new open model" I can only associate it with hype, which is more often about pleasing investors than end users.

dankobgd 14 February 2025
ai is boooring, they should fix the core features before they add useless ai

lordnacho 14 February 2025
Is there a thing that does this for the terminal? I hate it when I'm fiddling with some complicated command and I have to juggle the flags as well as my personal inputs like paths and such.

bionhoward 15 February 2025
My thoughts were: what are the terms? Are you training on our code? Are we able to use completions to compete with you?

gardenhedge 14 February 2025
Personally, before I hit tab to confirm a change, I would want to see the before and after rather than just the after.

returnInfinity 14 February 2025
Future CPUs must be able to run this model locally. This is the way. I have spoken.

rw_panic0_0 14 February 2025
Nice to see they open-sourced the model, and it seems small enough to run locally! Also, please change the video preview: it's hard to see the feature itself, and it's not obvious what is being shown.

lubitelpospat 14 February 2025
Dear Zed devs - please, fix the bug with the "Rename symbol" functionality! Refactoring is an important feature that many of your users need to have to start using Zed as their main daily driver. Otherwise - great IDE! Please, help me forever forget the VSCode nightmare!

walthamstow 14 February 2025
Of all the AI aids, autocomplete is my least favourite, at least in my experience with Cursor.

It takes me longer to review the autocomplete (I ain't yoloing it in) than it would have taken to type the damn thing out. Loving Cursor's cmd+k workflow though, very productive.

vednig 14 February 2025
Why can't this be a script, as in the old days? It looks like overkill if you compare the change against the compute/effort. It would be nice to see it evolve, though.

daft_pink 14 February 2025
Is it on device? GitHub code completion works so well.

shinryuu 14 February 2025
I'm curious how they plan to fund the company.

gyre007 14 February 2025
Please add vim leader support to vim mode! :)

freehorse 14 February 2025
Can we run it locally to get autocomplete?

vijaybritto 14 February 2025
There begins the downfall. Any great product that jumps on a hype train always ends up crashing.

billwear 14 February 2025
hmm. company-mode has been doing part of this job for a long time now.

littlestymaar 14 February 2025
> A few weeks out from launch, we ran a brief competitive process, and we ended up being really impressed with Baseten.

What? I really fail to see how this can make sense for a company like Zed: Baseten bills by the minute, which can be really useful if you need to handle small bursts of compute, but on the flip side they charge you a 5x premium if you end up being billed for complete hours…

Verlyn139 14 February 2025
That website is one of the most unresponsive ones I've seen in a while.

deagle50 14 February 2025
Does the mouse cursor still not hide while typing in Zed?

ekvintroj 14 February 2025
What a scam this AI stuff is.

rs186 14 February 2025
As usual, AI has a higher priority than working on very basic stuff like creating a Windows build.

If you guys want to compete with VSCode, think again.