Hey HN, for the past week I've been building TattoosAI as a little learning project to get comfortable with Stable Diffusion & DALL-E. I'm absolutely shocked by how powerful SD is... Just like GPT-3 helped copywriters/marketers be more effective, SD/DALL-E is going to be a game changer for artists!
Giving up my email for a try is a no-go from me, but it's definitely a super interesting concept and, at least from the quick video, I could see this becoming fairly popular if the output is reasonable (and probably an adjustment on pricing -- I would not pay for it as is).
I think it would be useful for tattoo artists as well. I'd be totally on-board if my artist whipped something along these lines out on their tablet during a consult, we browsed through some designs based on my inputs, and then have my artist modify/add their personal touch.
I REALLY wanted to allow testing without an email. The issue is that every trial costs me money, so not "paywalling" it with a login would mean people could generate unlimited times. Bad tradeoff, but I had to :/
Agreed on the artist part. My previous startup (headlime.com, acquired by Jasper.ai) was mostly used by copywriters rather than non-savvy folks. I genuinely believe that text2image AI will help artists, not hurt them.
Using the SD API is great for a POC, but to scale this and take full advantage of the cost benefits, you need to spin up a cloud GPU.
Heck - you can even build an off-the-shelf PC and pull workloads from the cloud, then upload the resulting images. That would work well within your traffic needs. Maybe the upfront cost isn't worth it to you, but if you already have the components ...
(I've done this myself. I run a dual cloud / on-prem 3090 cluster.)
Planning on learning how to do this next week. If you have any articles/videos that could point me in the right direction, that would be highly appreciated.
It's just a task library (I used Dramatiq) with a worker that connects to it. Then the worker picks up tasks, processes them and uploads the results to R2.
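For anyone curious what that looks like in practice, here's a minimal sketch of the queue + worker pattern, assuming a Redis broker, R2 credentials in environment variables, and a hypothetical generate_image() helper standing in for the actual SD call (none of those details come from the project itself):

    # Minimal sketch: Dramatiq worker that renders an image and uploads it to R2.
    # REDIS_URL, the R2_* variables and generate_image() are assumptions.
    import os
    import uuid

    import boto3
    import dramatiq
    from dramatiq.brokers.redis import RedisBroker

    dramatiq.set_broker(RedisBroker(url=os.environ.get("REDIS_URL", "redis://localhost:6379")))

    # Cloudflare R2 is S3-compatible, so boto3 works with a custom endpoint.
    r2 = boto3.client(
        "s3",
        endpoint_url=os.environ["R2_ENDPOINT"],
        aws_access_key_id=os.environ["R2_ACCESS_KEY_ID"],
        aws_secret_access_key=os.environ["R2_SECRET_ACCESS_KEY"],
    )

    def generate_image(prompt: str) -> bytes:
        # Placeholder for the actual Stable Diffusion call.
        raise NotImplementedError

    @dramatiq.actor(max_retries=3)
    def render_tattoo(prompt: str) -> None:
        """Worker task: generate an image for the prompt and upload it to R2."""
        image_bytes = generate_image(prompt)
        key = f"tattoos/{uuid.uuid4()}.png"
        r2.put_object(
            Bucket=os.environ["R2_BUCKET"],
            Key=key,
            Body=image_bytes,
            ContentType="image/png",
        )

    # The web app enqueues work; a separate `dramatiq` worker process running
    # this module picks it up:
    #   render_tattoo.send("A black and white tattoo of a bouquet of roses")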
You’re losing a huge chunk of the market by not solving this. I would have generated a few, but I’m not giving you or anyone else my email unless I really need something.
FYI, you still have "Lorem ipsum dolor sit amet, consec tetur adipis elit " in the pricing page.
Also, the ideas page has a lot of weirdly low-quality stuff (e.g. the Heath Ledger Joker doesn't look like Heath Ledger, Baby Yoda doesn't look like Baby Yoda, the Totoro tattoos are generally bad, etc.). And then there are things like this: https://www.tattoosai.com/tattoos/62ecce4651c8873ccf2bbe67
The pricing model of a one-time payment is also quite weird... I get that this is a quick project so it's not polished, but I would not start charging for it as is.
Ah, so you're not hosting SD yourself in the cloud, you're just using DreamStudio's API? Excellent idea, much simpler setup, and $0.02 actually seems pretty cheap.
So if I understand correctly, the website takes input from the users, builds a phrase from the selections (like maybe, "A black and white tattoo of a bouquet of roses, in the old school style"), and then passes that phrase to Stable Diffusion to generate the images. If you have access to DreamStudio, I think we can try out phrases like this for free and see the output.
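A rough sketch of that prompt assembly (the field names and the template are guesses, not taken from the site):

    # Guessing at the phrase template; the site's actual fields are unknown.
    def build_prompt(subject: str, style: str, color: str) -> str:
        return f"A {color} tattoo of {subject}, in the {style} style"

    prompt = build_prompt("a bouquet of roses", "old school", "black and white")
    # -> "A black and white tattoo of a bouquet of roses, in the old school style"
    # This phrase is then sent to Stable Diffusion (e.g. via the DreamStudio API).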
It would be nice to explore the catalog of created art. I don't want to register or generate anything myself, but I'd like to see how the creativity in people's prompts leads to the creativity of the AI.
Definitely a cool use for this sort of thing. I've been playing with SD for the past couple of weeks on my home desktop. It's not at a stage where you'd necessarily use its output directly as a print, let alone as a tattoo on your skin.
But it's great at generating ideas for things that you can then touch up or recreate without the usual AI artifacts and tells (unless that's the aesthetic you're going for).
Kudos for the idea and putting the service together.
In practice I wouldn't let my tattoo artist ink me if they couldn't come up with sketches like the ones on the explore-designs page.
But then I'm not a tattoo artist, and maybe that's exactly the extra bit of creativity an artist sometimes wants/needs.
The lstein fork [1] of the CompVis main repo works on Apple Silicon-based machines (and may work on Intel-based ones too). It's not very fast though: ~3.5 minutes for 50 steps on my 16GB M1 Mini, whereas I understand a 3080 can spit them out in the 30-second range. M<x> machines with higher GPU core counts are presumably faster.
Hey you may not get this reply till much later but I'd love more info.
From my research over the last couple of days, it seems that PyTorch will only work with AMD cards in combination with ROCm, and ROCm specifically doesn't support the older AMD GPUs found in Mac laptops from as recently as 2020.
MPS is Metal Performance Shaders, Apple's macOS framework for ML workloads. The MPS libraries are only on macOS, but they support both Apple Silicon and AMD GPUs. This means that on macOS you specify the 'mps' backend to PyTorch as the device instead of 'cuda' or 'cpu', and macOS runs operations on whatever GPU is available, be it an M1, an M2, or an AMD GPU.
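For reference, the device selection in PyTorch looks something like this (a minimal sketch, assuming a recent PyTorch build with MPS support; the model code itself is omitted):

    import torch

    # Pick the best available backend: MPS on macOS (Apple Silicon or AMD GPU),
    # CUDA on Nvidia, otherwise fall back to CPU.
    if torch.backends.mps.is_available():
        device = torch.device("mps")
    elif torch.cuda.is_available():
        device = torch.device("cuda")
    else:
        device = torch.device("cpu")

    # The same SD pipeline code then runs on whichever device was picked,
    # e.g. model.to(device) and tensors created with device=device.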
I don't have an AMD Mac to test on but on the Nvidia side of things there's support now for 4GiB cards with the right configuration, so it might be possible.
Maybe it's just that we're not especially excited to see the re-privatization of a model that was only made public about a week ago, wrapped up and packaged to take out a whole industry.
OTOH maybe it's just that aside from the dubious morality of the project, this guy doesn't deserve any acclaim for what he's done.
I’d be concerned if they set up a robot arm to do on-demand tattoos, but this is just a project that's doing what every other project is doing and charging for compute time.