Hacker News | palmdeezy's comments

Jared Palmer, v0 creator/team lead here...

Happy to answer any questions!


Hi Jared, I'm currently working on Commitspark, a headless CMS that uses vanilla GraphQL to define the content model and GitHub for storage and workflows. With this approach I can already today use ChatGPT to not only generate a website content model from a user prompt (e.g. "I want a component-based website with a hero component with title and image, a product component with article number, ...") but also entire web page content from user prompts (e.g. "I have this marketing text here, turn it into content data for a web page using the components in my content model: ...").
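A content model like the one prompted above could be expressed in GraphQL SDL along these lines (type and field names here are hypothetical illustrations, not taken from Commitspark):

```graphql
# Hypothetical SDL sketch of the content model described in the prompt
type Hero {
  title: String!
  image: String
}

type Product {
  articleNumber: String!
  description: String
}

union PageComponent = Hero | Product

type Page {
  slug: String!
  components: [PageComponent!]!
}
```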

The perfect addition here would be to also enable users to generate React components that visualize each content component (I was able to prototype that already with ChatGPT, but I assume what you're doing is more advanced).

Wrap all of this up into a Git repo with Vercel deployment workflow for GitHub and it should be possible to go from prompt to CMS-driven website in a really short time.

So, my question: Would it be possible to get access to V0? Also, feel free to reach out if you see potential for collaboration.


What is the business problem you are trying to solve with this?


v0 saves loads of time by allowing you to quickly generate UIs with React via Shadcn UI (https://ui.shadcn.com) with simple text prompts. While full-scale design tools are very useful, not every UI in your app needs that level of fidelity. Furthermore, a lot of UIs inside of apps and websites are already extremely programmatic (such as forms, tables, modals, etc.). The goal with v0 is to get you started (hence the name) faster... to give you something you can copy and paste and then modify yourself.


Thanks for taking the time to answer questions. Maybe it was mentioned in an earlier article, but it's not in this one: is this a proprietary model?


How does one get off the waitlist?

My Vercel login is with GitHub, same username as here.


Hi Jared.

Awesome use of an LLM. Do you position v0 as a helper for developers or designers?


v0 is really for everyone. We hope it makes everyone more productive and creative...reducing the cost of iteration and experimentation.


Are there any plans to support Svelte in the near future?


Jared Palmer here... v0 Creator/Team Lead.

Tailwind is indeed required at the moment. However, we're working on supporting other popular design systems as well as custom ones.


Hey Jared - that would be great, even better if it's framework agnostic. Nice work though!


We're investigating this now. Early experiments are very promising.


Hi everyone! Jared Palmer (https://x.com/jaredpalmer) here from the v0 team and Vercel. Happy to answer any questions. The team is very excited to finally share what we've been working on with the community. We know it's early, but as the name implies... it's v0.

Link: https://v0.dev | FAQ: https://v0.dev/faq


If you want a good response from HN, it would help to let us past the waitlist. Right now it's just a gallery for existing generations.


Can you talk at all about the tech that goes into making something like this? Or about the problems you faced building a system for production?


Looks interesting. Are everyone's generations made public, though? A private option would be nice.


Yes! This is something we'll be adding soon!


Hola! Y'all can play with Llama 2 for free and compare it side by side to over 20 other models on the Vercel AI SDK playground.

Side-by-side comparison of Llama 2, Claude 2, GPT-3.5-turbo and GPT: https://sdk.vercel.ai/s/EkDy2iN


Investigating now. Thanks for the feedback!


Fixed Cohere and Replicate. Will add the other provider now! Appreciate the help!


Thanks for the fixes. Sounds great, looking forward to it!


Correct. We are using various hosting providers.

As part of the project, I’ve been working with providers and hosts on updating their SDKs to work on Vercel Edge Functions (and streaming).


Thanks man! Good idea on token/s. Will add


Would be nice to know cost too!

tokmon is on showHN today too

https://news.ycombinator.com/item?id=35616871


I'm super excited about graphing this over time. It will be interesting to see how the providers develop over the coming months. FWIW, we also need TTFB, not just tokens per second. The first token is key for UX.
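As a rough illustration, TTFB (time to first token) and tokens per second can both be derived from the same streaming loop. A minimal, self-contained Node.js sketch, using a simulated token stream (all names here are hypothetical, not from the Vercel AI SDK):

```javascript
// Simulates a model streaming tokens with a fixed inter-token delay.
async function* simulatedStream(tokens, delayMs) {
  for (const t of tokens) {
    await new Promise((resolve) => setTimeout(resolve, delayMs));
    yield t;
  }
}

// Measures time-to-first-token and overall throughput for any async
// iterable of tokens (a real SDK stream would work the same way).
async function measureStream(stream) {
  const start = Date.now();
  let ttfbMs = null;
  let count = 0;
  for await (const _token of stream) {
    if (ttfbMs === null) ttfbMs = Date.now() - start;
    count += 1;
  }
  const elapsedSec = (Date.now() - start) / 1000;
  return { ttfbMs, tokensPerSec: count / elapsedSec };
}

// Example usage with a simulated stream of five tokens.
measureStream(simulatedStream(['the', 'quick', 'brown', 'fox', '.'], 10)).then(
  (m) => console.log(`TTFB ${m.ttfbMs}ms, ${m.tokensPerSec.toFixed(1)} tok/s`)
);
```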


Engineering Director of Frameworks at Vercel here…

It works automatically when you use the `app` directory in 13.2+. No additional steps are needed when you self-host or run on Node.js.


Does this have any ability to cache GraphQL requests, or is there a recommended strategy for doing so?


Is there an API for it? I can't find anything in the docs.


Turborepo author here…

We do not invalidate the whole graph anymore for a lockfile change. We now have a sophisticated parser that can calculate if a change within the lockfile should actually alter the hash of a given target. In addition to higher cache hit rates, this is what powers our `turbo prune` command which allows teams to create slices of their monorepo for a target and its dependencies…useful for those building in Docker.

Prune docs: https://turbo.build/repo/docs/reference/command-line-referen...
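For reference, the Docker workflow around `turbo prune` typically looks something like this multi-stage sketch (the app name `web` and npm as the package manager are assumptions for illustration, not specifics from this thread):

```dockerfile
# Stage 1: prune the monorepo down to the "web" app and its dependencies
FROM node:18-alpine AS pruner
WORKDIR /app
RUN npm install -g turbo
COPY . .
RUN turbo prune --scope=web --docker

# Stage 2: install from the pruned lockfile, then build
FROM node:18-alpine AS builder
WORKDIR /app
# Copy only the pruned package.json files + lockfile first,
# so dependency installation is cached as its own Docker layer
COPY --from=pruner /app/out/json/ .
COPY --from=pruner /app/out/package-lock.json ./package-lock.json
RUN npm ci
# Then copy the pruned source and build just the target
COPY --from=pruner /app/out/full/ .
RUN npx turbo run build --filter=web
```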

Turborepo is much more scalable now than when we spoke pre-Vercel acquisition. It now powers the core web codebases at Netflix, Snap, Disney Streaming, Hearst, Plex and thousands of other high-performance teams. You can see a full list of users here: https://turbo.build/showcase

Would be happy to reconnect about Uber’s web monorepo sometime.


Can prune be used to build a bundle (as in a zip) for, say, AWS Lambda, which includes only the dependencies (and not dev dependencies)? I've played around with pnpm's deploy, but it felt a bit lackluster, especially in situations where one has a backend package and some shared package. The bundle should contain all dependencies (but not dev dependencies) of the backend package and the shared package, and of course the built shared package should also be included in the bundle's node_modules.


You can do that with esbuild. Just bundle the handler entry into a single file and esbuild will tree-shake the cruft out. That's the approach taken by the AWS CDK as well.


Yeah, that always works, of course, even though you might still want to externalize certain dependencies. In this case I don't want to bundle the code (although I might end up doing that anyway).


You can give esbuild a list of dependencies that you want to be external and it won't bundle them. I've done it with the AWS SDK and it works as expected.
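The CLI invocation might look roughly like this (entry point and output paths are hypothetical; esbuild's `--external` flag accepts wildcard patterns):

```shell
# Bundle a Lambda handler, keeping the AWS SDK out of the bundle
# (it is provided by the Lambda runtime)
npx esbuild src/handler.ts \
  --bundle \
  --platform=node \
  --target=node18 \
  --external:@aws-sdk/* \
  --outfile=dist/handler.js
```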


Yes! Prune then zip the output folder.


Kind of. Poster asked if you can prune only deps and exclude dev deps. That's currently unsupported: https://github.com/vercel/turbo/issues/1100


That's correct. However, I've since tried to use prune; I'm not sure if I am using it correctly, but here's what I do:

I build my packages regularly (not pruned), then I prune with scope "backend". Apparently the pruned directory contains a node_modules with empty packages; I'm not sure what the reason for that is, so I just ignored it. In the resulting directory I then run `pnpm install --prod`, and only the regular dependencies get installed. I think this is enough for my use case, though I am not sure if prune is supposed to be used this way.
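The steps described above, as a shell sketch (the scope name comes from the comment; the exact flags are otherwise assumptions):

```shell
turbo run build                # build all packages, unpruned
turbo prune --scope=backend    # writes a pruned copy of the repo to ./out
cd out
pnpm install --prod            # installs only production dependencies
```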


Turborepo author here...

> * tasks on the root package (e.g. tsc -b that typechecks all packages)

We are working on this as we speak! The first step is to add the ability to restrict hashing `inputs`[1] to the Turborepo `pipeline`. After that we are going to be adding root task running in the next minor release.

However, as your monorepo grows, you will likely want to move away from running tasks like tsc from the root and instead run them on a per-package basis. The reason is that tools like Bazel, Buck, Turborepo, etc. can become more incremental (and thus faster) as your dependency/task graph becomes more granular (as long as you maintain or reduce the average affected blast radius of a given change). The other argument against root tasks is that they break hermeticity and encapsulation of the package abstraction. That being said, root tasks are very useful for fast migration to Turborepo and also for smaller repos. Furthermore, we're happy to trade off academic purity for productivity with features like this.

> treat tasks such as lint:eslint, lint:prettier as a single task lint (or maybe `lint:*`)

You can run multiple tasks at the same time and Turborepo will efficiently schedule them at max concurrency.

turbo run eslint prettier --filter=@acme/...

However, it sounds like you'd like to see glob fan-out of tasks. This is a really cool idea. I created a GitHub issue for it here [2] if you'd like to follow along.

[1]: https://github.com/vercel/turborepo/pull/951

[2]: https://github.com/vercel/turborepo/issues/1029


> We are working on this as we speak!

Perfect!

> However, it sounds like you'd like to see glob fan-out of tasks.

Yes, the idea here being that I don't want to list all similar tasks (such as linting) explicitly in the Turborepo config. Teams should be free to add any additional lint task if they think it's useful for them (and possibly only for them). Similar to Maven (Java), where additional goals can be bound to the standard lifecycle phases.

