Stable Diffusion Links
human motion diffusion
stable dream-fusion
A working PyTorch implementation of the text-to-3D model DreamFusion, powered by the Stable Diffusion text-to-2D model.
The original paper's project page: DreamFusion: Text-to-3D using 2D Diffusion.
3d diffusion
We present 3DiM (pronounced "three-dim"), a diffusion model for 3D novel view synthesis from as few as a single image. The core of 3DiM is an image-to-image diffusion model -- 3DiM takes a single reference view and a relative pose as input, and generates a novel view via diffusion. 3DiM can then generate a full 3D consistent scene following our novel stochastic conditioning sampler. 3DiMs are geometry free, do not rely on hyper-networks or test-time optimization for novel view synthesis, and allow a single model to easily scale to a large number of scenes.
phenaki
fast-stable-diffusion
fast-stable-diffusion: roughly a 25% speed increase, plus improved memory efficiency.
prompt-search
This is the code we used to build our CLIP semantic search engine for krea.ai. This work was heavily inspired by clip-retrieval, autofaiss, and CLIP-ONNX. We kept our implementation simple, focused it on working with data from open-prompts, and prepared it to run efficiently on a CPU.
CLIP lets us find generated images from an input that can be a prompt or another image. It can also be used to find related prompts from the same input.
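A minimal sketch of the retrieval step described above: once CLIP embeddings have been precomputed for the generated images, a query embedding (from a prompt or an image) is matched by cosine similarity. The names below are illustrative; a production engine would use the actual CLIP encoders and an approximate-nearest-neighbor index such as FAISS rather than brute force.

```python
import numpy as np

def top_k(query_emb, image_embs, k=3):
    """Return indices of the k images most similar to the query.

    query_emb:  (d,) CLIP embedding of the prompt or image query.
    image_embs: (n, d) matrix of precomputed CLIP image embeddings.
    """
    # Normalize so the dot product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb)
    m = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    sims = m @ q
    return np.argsort(-sims)[:k]

# Toy demo with random stand-in "embeddings"; real CLIP vectors
# would come from the image/text encoders.
rng = np.random.default_rng(0)
embs = rng.normal(size=(100, 512))
query = embs[42] + 0.01 * rng.normal(size=512)  # query near image 42
best = top_k(query, embs, k=1)  # best match should be index 42
```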
open prompts
Open Prompts contains the data we use to build krea.ai. Now, you can get access to this data too.
You can either download a (large) CSV file with image links and metadata for more than 10M generations, or access it through our free API (still in development).
Everyone is welcome to contribute their own prompts and ideas.
If you want to use this data to implement a semantic search engine with CLIP (like we did), check out prompt-search.
ImaginAIry
AI-imagined images: Pythonic generation of Stable Diffusion images that "just works". Adding support for a bunch of other models is on the roadmap.
Features:
- txt2mask automated replacement
- codeformer face fix
- RealESRGAN upscaling
- tiling images
- img2img
txt2mask prompt inpainting
This project helps you do prompt-based inpainting without having to paint the mask - using Stable Diffusion and Clipseg
It takes three mandatory inputs:
- Input Image URL
- Prompt of the part in the input image that you want to replace
- Output Prompt
There are a few parameters you can tune:
- Mask Precision
- Stable Diffusion Generation Strength
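As a sketch, the inputs and parameters listed above can be collected into a single request like this. The function and field names are hypothetical and only mirror the description; they are not the project's actual API.

```python
# Hypothetical request builder for prompt-based inpainting: three
# mandatory inputs plus two tunable parameters, as described above.
def build_inpaint_request(image_url, mask_prompt, output_prompt,
                          mask_precision=0.5, strength=0.75):
    if not (image_url and mask_prompt and output_prompt):
        raise ValueError("image_url, mask_prompt and output_prompt are all required")
    if not 0.0 <= mask_precision <= 1.0:
        raise ValueError("mask_precision must be in [0, 1]")
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be in [0, 1]")
    return {
        "image_url": image_url,          # input image to edit
        "mask_prompt": mask_prompt,      # the part of the image to replace
        "output_prompt": output_prompt,  # what to generate in its place
        "mask_precision": mask_precision,
        "strength": strength,
    }

req = build_inpaint_request(
    "https://example.com/dog.png", "the dog", "a red fox")
```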
diffuser models
Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch
- State-of-the-art diffusion pipelines that can be run in inference with just a couple of lines of code (see src/diffusers/pipelines). Check this overview to see all supported pipelines and their corresponding official papers.
- Various noise schedulers that can be used interchangeably for the preferred speed vs. quality trade-off in inference (see src/diffusers/schedulers).
- Multiple types of models, such as UNet, can be used as building blocks in an end-to-end diffusion system (see src/diffusers/models).
- Training examples to show how to train the most popular diffusion model tasks (see examples, e.g. unconditional-image-generation).
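To make the scheduler bullet concrete, here is a toy numpy illustration of the forward noising process that diffusion schedulers parameterize: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps, with a linear beta schedule. This is a sketch of the underlying math, not the diffusers API.

```python
import numpy as np

# Linear beta schedule over T steps, and the cumulative product
# abar_t = prod(1 - beta_i) that schedulers precompute.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)

def add_noise(x0, t, eps):
    """Sample x_t from q(x_t | x_0) given fixed noise eps."""
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1 - alphas_bar[t]) * eps

rng = np.random.default_rng(0)
x0 = rng.normal(size=4)
eps = rng.normal(size=4)
x_early = add_noise(x0, 10, eps)    # still close to the clean sample
x_late = add_noise(x0, T - 1, eps)  # almost pure noise
```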
For the first release, 🤗 Diffusers focuses on text-to-image diffusion techniques. However, diffusers can be used for much more than that! Over the upcoming releases, we'll be focusing on:
- Diffusers for audio
- Diffusers for reinforcement learning (initial work happening in https://github.com/huggingface/diffusers/pull/105).
- Diffusers for video generation
- Diffusers for molecule generation (initial work happening in https://github.com/huggingface/diffusers/pull/54)
Bible illustrated with stable diffusion
stable diffusion UI
Back-end-independent UI.
Japanese stable diffusion
Japanese Stable Diffusion is a Japanese-specific latent text-to-image diffusion model.
Photoshop plugin
Probably expensive and proprietary; I don't use Photoshop, so I haven't tried it.
https://christiancantrell.com/#ai-ml
Stable diffusion for tiling textures
A fork of the Stable Diffusion Cog model that outputs tileable images for use in 3D applications such as Monaverse
Stable Diffusion: Tutorials, Resources, and Tools
https://stackdiary.com/stable-diffusion-resources/
stablediffusiond
A daemon which watches for messages on RabbitMQ and runs Stable Diffusion
- No hot loading - the model stays resident in RAM (~10 GB) for faster processing
- Daemon - existing solutions use a webserver; this uses a daemon, which is more lightweight
- Less bloat - code and dependencies have been kept to a minimum
- Flexibility - request daemon, response daemon and queue system can be run independently, allowing for more efficient use of resources
- Easy to use - just run the daemon and send messages to the queue using send.py
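A minimal stand-in for the daemon's request loop, to show the control flow. The real project consumes messages from RabbitMQ (typically via a client library such as pika); here a stdlib queue plays that role, and the generation call is a placeholder.

```python
import queue
import threading

requests = queue.Queue()
results = []

def fake_generate(prompt):
    # Placeholder for the Stable Diffusion call; in the real daemon
    # the model stays loaded in RAM between requests.
    return f"image for: {prompt}"

def daemon_loop():
    # Block on the queue, process each message, stop on a sentinel.
    while True:
        msg = requests.get()
        if msg is None:  # sentinel: shut down
            break
        results.append(fake_generate(msg))
        requests.task_done()

worker = threading.Thread(target=daemon_loop, daemon=True)
worker.start()
requests.put("a cat in a spacesuit")
requests.put(None)
worker.join()
```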
How to come up with good prompts for AI image generation
https://www.sagiodev.com/blog/how_to_engineer_prompts_for_ai_images/
Prompt-to-Prompt Image Editing with Cross Attention Control in Stable Diffusion
A better (?) way of doing img2img by finding the noise which reconstructs the original image
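The idea can be shown with one step of the forward process: x_t = sqrt(abar) * x0 + sqrt(1 - abar) * eps is invertible for eps given x0 and x_t, so the noise that reconstructs an image can be solved for exactly. This toy only shows the single-step algebra; real implementations invert the full multi-step sampler (e.g. DDIM inversion).

```python
import numpy as np

alpha_bar = 0.5
rng = np.random.default_rng(1)
x0 = rng.normal(size=8)   # stand-in for the original image
eps = rng.normal(size=8)  # the true noise
x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1 - alpha_bar) * eps

# Recover the noise from x_t and x0 by inverting the forward step:
eps_hat = (x_t - np.sqrt(alpha_bar) * x0) / np.sqrt(1 - alpha_bar)
```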
dream textures
Stable Diffusion built-in to the Blender shader editor.
- Create textures, concept art, background assets, and more with a simple text prompt
- Quickly create variations on an existing texture
- Experiment with AI image generation
- Run the models on your machine to iterate without slowdowns from a service
inpainter
Inpainting is a process where missing parts of an artwork are filled in to present a complete image. This open-source demo uses the Stable Diffusion machine learning model and Replicate's API to inpaint images right in your browser.