Stable Diffusion stuff

Stable Diffusion Links


human motion diffusion

Source Code


stable-dreamfusion

A working PyTorch implementation of the text-to-3D model DreamFusion, powered by the Stable Diffusion text-to-2D model.

source code

The original paper's project page: DreamFusion: Text-to-3D using 2D Diffusion.
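
At its core, DreamFusion optimizes a NeRF so that its renders look plausible to the frozen 2D diffusion model, via score distillation sampling (SDS). A minimal sketch of one SDS update, where `nerf`, `render_nerf`, `unet`, `sample_random_camera`, and `alphas` are all hypothetical stand-ins:

```python
import torch

def sds_step(nerf, unet, text_embedding, alphas, optimizer):
    """One score distillation sampling update (sketch)."""
    camera = sample_random_camera()            # hypothetical: random viewpoint
    image = render_nerf(nerf, camera)          # hypothetical: differentiable render

    t = torch.randint(20, 980, (1,))           # random diffusion timestep
    noise = torch.randn_like(image)
    alpha_t = alphas[t]                        # cumulative noise schedule
    noisy = alpha_t.sqrt() * image + (1 - alpha_t).sqrt() * noise

    with torch.no_grad():                      # the 2D model stays frozen
        noise_pred = unet(noisy, t, text_embedding)

    # SDS skips the U-Net Jacobian: the gradient w.r.t. the render is just
    # (predicted noise - injected noise), pushed back through the renderer.
    image.backward(gradient=noise_pred - noise)
    optimizer.step()
    optimizer.zero_grad()
```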


3d diffusion

homepage

We present 3DiM (pronounced "three-dim"), a diffusion model for 3D novel view synthesis from as few as a single image. The core of 3DiM is an image-to-image diffusion model: 3DiM takes a single reference view and a relative pose as input, and generates a novel view via diffusion. 3DiM can then generate a full 3D-consistent scene following our novel stochastic conditioning sampler. 3DiMs are geometry-free, do not rely on hyper-networks or test-time optimization for novel view synthesis, and allow a single model to easily scale to a large number of scenes.

paper
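
The stochastic conditioning sampler is what makes the autoregressive generation 3D-consistent: at every denoising step, the model is re-conditioned on a randomly chosen view generated so far. A rough sketch, with `denoise_step` as a hypothetical wrapper around one reverse-diffusion step of the pose-conditional model:

```python
import random
import torch

def sample_novel_view(denoise_step, views, target_pose, shape, steps=256):
    """Generate one novel view; `views` holds (image, pose) pairs so far."""
    x = torch.randn(shape)                           # start the new view from pure noise
    for t in reversed(range(steps)):
        cond_view, cond_pose = random.choice(views)  # resample the condition each step
        x = denoise_step(x, t, cond_view, cond_pose, target_pose)
    return x

# Building a scene: start from the single reference view, then append each
# generated view so later views stay consistent with everything before them.
# views = [(reference_image, reference_pose)]
# for pose in camera_path:
#     views.append((sample_novel_view(denoise_step, views, pose, reference_image.shape), pose))
```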


phenaki


fast-stable-diffusion

fast-stable-diffusion: roughly a 25% speed increase, plus better memory efficiency.

source code


prompt-search

Source Code

This is the code that we used to build our CLIP semantic search engine for krea.ai. This work is heavily inspired by clip-retrieval, autofaiss, and CLIP-ONNX. We kept our implementation simple, focused on working with data from open-prompts, and prepared to run efficiently on a CPU.

CLIP lets us find generated images given an input that can be either a prompt or another image. It could also be used to find other prompts given the same input.
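
The general recipe: embed every generated image with CLIP once, put the embeddings in a vector index, and embed the query (text or image) into the same space at search time. A minimal sketch using the open CLIP weights from transformers and a flat FAISS index (their actual pipeline uses clip-retrieval/autofaiss-style tooling, and the file names here are hypothetical):

```python
import faiss
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_images(paths):
    images = [Image.open(p) for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1).numpy()

def embed_text(query):
    inputs = processor(text=[query], return_tensors="pt", padding=True)
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1).numpy()

paths = ["gen_0001.png", "gen_0002.png"]     # hypothetical generated images
index = faiss.IndexFlatIP(512)               # inner product == cosine on normalized vectors
index.add(embed_images(paths))

scores, ids = index.search(embed_text("a castle at sunset"), 2)
print([paths[i] for i in ids[0]])
```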


open prompts

Source code

Home page

Open Prompts contains the data we use to build krea.ai. Now, you can get access to this data too.

You can either download a (large) CSV file with image links and metadata for >10M generations, or access it through our free API (still in development).
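
For the CSV route, something like the following works; the column names (`prompt`, `image_uri`) are assumptions, so check the actual header first:

```python
import pandas as pd

# The dump is >10M rows, so stream it in chunks instead of loading it whole.
# Column names are assumptions; check the CSV header after downloading.
matches = []
for chunk in pd.read_csv("open_prompts.csv", chunksize=1_000_000):
    hits = chunk[chunk["prompt"].str.contains("cyberpunk", case=False, na=False)]
    matches.append(hits[["prompt", "image_uri"]])

print(pd.concat(matches).head())
```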

Everyone is welcome to contribute their own prompts and ideas.

If you want to use this data to implement a semantic search engine with CLIP (like we did), check out prompt-search.


ImaginAIry

AI imagined images. Pythonic generation of Stable Diffusion images that "just works"; adding a bunch of other models is on the roadmap.

Source Code
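
A quick taste of the Pythonic interface, assuming the `ImaginePrompt`/`imagine_image_files` API shown in the project README:

```python
from imaginairy import ImaginePrompt, imagine_image_files

# Each ImaginePrompt describes one generation job; seeds make runs repeatable.
prompts = [
    ImaginePrompt("a photo of a bowl of fruit", seed=1),
    ImaginePrompt("a scenic mountain landscape at dawn"),
]
imagine_image_files(prompts, outdir="./outputs")  # writes the images to ./outputs
```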


txt2mask prompt inpainting

This project helps you do prompt-based inpainting, without having to paint the mask, using Stable Diffusion and CLIPSeg.

Source code

Step-by-Step Tutorial

It takes three mandatory inputs:

  1. Input Image URL
  2. Prompt of the part in the input image that you want to replace
  3. Output Prompt

There are certain parameters that you can tune (see the sketch after this list):

  1. Mask Precision
  2. Stable Diffusion Generation Strength
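
A hedged sketch of the same pipeline built from off-the-shelf pieces (transformers' CLIPSeg for the mask, diffusers for inpainting); the URL and prompts are placeholders, and the sigmoid threshold plays the role of the mask-precision knob:

```python
import io
import requests
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation
from diffusers import StableDiffusionInpaintPipeline

image_url = "https://example.com/photo.png"       # placeholder input image URL
image = Image.open(io.BytesIO(requests.get(image_url).content)).convert("RGB")

# Inputs 1+2: segment the part to replace from a text prompt with CLIPSeg.
seg_processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
seg_model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")
inputs = seg_processor(text=["a dog"], images=[image], return_tensors="pt")
with torch.no_grad():
    logits = seg_model(**inputs).logits           # low-res relevance map

threshold = 0.4                                   # the "mask precision" knob
mask = (torch.sigmoid(logits) > threshold).float()
mask_image = Image.fromarray(
    (mask.squeeze().numpy() * 255).astype("uint8")
).resize(image.size)

# Input 3: regenerate the masked region from the output prompt.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")
result = pipe(prompt="a cat", image=image, mask_image=mask_image).images[0]
result.save("inpainted.png")
```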

diffusers

source code

Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch

For the first release, 🤗 Diffusers focuses on text-to-image diffusion techniques. However, diffusers can be used for much more than that, and more areas will be covered over the upcoming releases.
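
The basic text-to-image call is only a few lines (the v1.4 checkpoint shown here; any compatible checkpoint id works):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

image = pipe("an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")
```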


Bible illustrated with stable diffusion

https://baible.com/


stable diffusion UI

Source

Backend-independent UI


Japanese stable diffusion

Source

Japanese Stable Diffusion is a Japanese-specific latent text-to-image diffusion model.


Photoshop plugin

Probably expensive proprietary bullshit; idk, I don't use Photoshop.

https://christiancantrell.com/#ai-ml


prompt search

https://prompthero.com/

https://lexica.art


Stable diffusion for tiling textures

Homepage

Source

A fork of the Stable Diffusion Cog model that outputs tileable images for use in 3D applications such as Monaverse
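
A common way such forks achieve seamless tiling is to switch every convolution to circular padding, so the feature maps wrap around at the edges; whether this fork does exactly that is an assumption, but the idea looks like this on top of diffusers:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to("cuda")

# Circular padding makes every conv wrap around the image edges, so the
# generated texture repeats seamlessly when tiled.
for model in (pipe.unet, pipe.vae):
    for module in model.modules():
        if isinstance(module, torch.nn.Conv2d):
            module.padding_mode = "circular"

pipe("seamless cobblestone texture").images[0].save("tile.png")
```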


Stable Diffusion: Tutorials, Resources, and Tools

https://stackdiary.com/stable-diffusion-resources/


stablediffusiond

Source

A daemon which watches for messages on RabbitMQ and runs Stable Diffusion
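
The daemon pattern itself is small; a sketch with pika, where the queue name, the message format, and `run_stable_diffusion` are assumptions for illustration:

```python
import json
import pika

# Queue name and message format are assumptions for illustration.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="stable_diffusion")

def on_message(ch, method, properties, body):
    request = json.loads(body)                    # e.g. {"prompt": "...", "seed": 42}
    run_stable_diffusion(request["prompt"])       # hypothetical generation call
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="stable_diffusion", on_message_callback=on_message)
channel.start_consuming()                         # block and process jobs forever
```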


How to come up with good prompts for AI image generation

https://www.sagiodev.com/blog/how_to_engineer_prompts_for_ai_images/


Prompt-to-Prompt Image Editing with Cross Attention Control in Stable Diffusion

Home Page

Source Code


A better (?) way of doing img2img by finding the noise which reconstructs the original image

Home page

Source code
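
The underlying idea (usually called DDIM inversion) is to run the deterministic DDIM update in the forward direction, recovering a noise latent that regenerates the input image almost exactly; edits then come from re-sampling from that latent with a modified prompt. A sketch, with `eps_model` and `alphas` as hypothetical stand-ins:

```python
import torch

# Hypothetical stand-ins: eps_model predicts noise, alphas is the cumulative
# noise schedule. Stepping the deterministic DDIM update forward in t recovers
# a latent x_T that (approximately) reconstructs the original image.
def ddim_invert(x0, eps_model, alphas, steps):
    x = x0
    for t in range(steps - 1):
        a_t, a_next = alphas[t], alphas[t + 1]
        eps = eps_model(x, t)
        x0_pred = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()
        x = a_next.sqrt() * x0_pred + (1 - a_next).sqrt() * eps
    return x  # feed this noise back through sampling with a new prompt
```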


dream textures

Source Code

Stable Diffusion built-in to the Blender shader editor.


inpainter

Homepage

Source Code

Inpainting is a process where missing parts of an artwork are filled in to present a complete image. This open-source demo uses the Stable Diffusion machine learning model and Replicate's API to inpaint images right in your browser.
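
The demo calls Replicate from the browser, but the equivalent from Python looks roughly like this; the model identifier and version below are placeholders, not the demo's actual endpoint:

```python
import replicate

# Placeholder model identifier/version; look up the real one on replicate.com.
output = replicate.run(
    "stability-ai/stable-diffusion-inpainting:<version-hash>",
    input={
        "prompt": "a vase of flowers",
        "image": open("photo.png", "rb"),
        "mask": open("mask.png", "rb"),   # white = region to fill in
    },
)
print(output)                             # URL(s) of the inpainted image
```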


AUTOMATIC1111 webui

Source Code