"A.I." (Machine Learning Algorithms) To Generate Art

If you're interested in this (literally) fast-evolving field, this might be worth a look.

https://aiartists.org/ai-generated-art-tools


Comments

  • edited April 22

    Looks cool.

    I foresee TinyML bridging the gap between ML and the Maker movement. I want to see some educated musical controller "things" at the edge.

  • edited April 21

    @mojozart said:
    Looks cool.

    I foresee TinyML bridging the gap between ML and the Maker movement. I want to see some educated musical controller "things" at the edge.

    Context: https://www.tinyml.org

    In addition to the breakthroughs in machine learning, remember that there is a LOT of money being poured into R&D in robotics, ranging from countries like Korea, Japan and China committing a chunk of their GNP to engineering, all the way to the work Elon Musk's team is doing at Tesla toward a general-purpose robot (called Optimus), which he claims will see production in a few years (you can safely add 3-5 years to his predictions). Things are going to change faster than people are prepared for, and soon.

  • I have been using Disco Diffusion for roughing out concepts.

  • This thread of DALL-E 2 representations of people's Twitter bios is, in equal parts, magnificent and terrifying. There's stuff in there that one might genuinely call "art".

  • @AudioGus said:
    I have been using Disco Diffusion for roughing out concepts.

    I think George Lucas just discovered who he's hiring for his new art department.

  • Definitely. I’ve been interested in AI for a while now. Not the ins and outs, but what it can do.

  • Just a few pieces I recently composed using Midjourney, which is currently the best model available outside of “Open”AI’s DALL-E 2.

  • edited April 22

    @farfromsubtle said:
    Just a few pieces I recently composed using Midjourney, which is currently the best model available outside of “Open”AI’s DALL-E 2.

    I think it is a tie between Midjourney, DALL-E 2 and Disco Diffusion, depending on the task at hand. Ah, but even then Hypnogram and Nightcafe can be the best tool too.

  • Here’s my AI art. Glaze App, Studio Artist for Mac, Wombo app, and Nightcafé:

    JohnHolland.Art

    Eagerly awaiting DALL•E.

  • Of course, Wombo! Another one tied.

  • edited April 22

    @NeuM said:

    @mojozart said:
    Looks cool.

    I foresee TinyML bridging the gap between ML and the Maker movement. I want to see some educated musical controller "things" at the edge.

    Context: https://www.tinyml.org

    In addition to the breakthroughs in machine learning, remember that there is a LOT of money being poured into R&D in robotics, ranging from countries like Korea, Japan and China committing a chunk of their GNP to engineering, all the way to the work Elon Musk's team is doing at Tesla toward a general-purpose robot (called Optimus), which he claims will see production in a few years (you can safely add 3-5 years to his predictions). Things are going to change faster than people are prepared for, and soon.

    That's a very good point. Not to speak of how easy it is to let robots slowly take over control of the lives of the people who own one. It can hear everything, it can see everything, and it helps make decisions in such a friendly way that people will think that's how it should be.

    What worries me more than the technology is how unconcerned people are about it, and how most are using it without hesitation.

    Thanks though for the nice list of links @NeuM!

  • edited April 22

    @rs2000 said:

    @NeuM said:

    @mojozart said:
    Looks cool.

    I foresee TinyML bridging the gap between ML and the Maker movement. I want to see some educated musical controller "things" at the edge.

    Context: https://www.tinyml.org

    In addition to the breakthroughs in machine learning, remember that there is a LOT of money being poured into R&D in robotics, ranging from countries like Korea, Japan and China committing a chunk of their GNP to engineering, all the way to the work Elon Musk's team is doing at Tesla toward a general-purpose robot (called Optimus), which he claims will see production in a few years (you can safely add 3-5 years to his predictions). Things are going to change faster than people are prepared for, and soon.

    That's a very good point. Not to speak of how easy it is to let robots slowly take over control of the lives of the people who own one. It can hear everything, it can see everything, and it helps make decisions in such a friendly way that people will think that's how it should be.

    What worries me more than the technology is how unconcerned people are about it, and how most are using it without hesitation.

    Thanks though for the nice list of links @NeuM!

    One positive I can think of, as true artificial intelligence and dextrous robots start to appear in our lives, is that these things in tandem will be able to do the dangerous and thankless work that will always need doing, plus they'll be able to spread out among the planets and asteroids to help colonization efforts within our solar system. This kind of thing may actually happen in our lifetimes. That's exciting.

    You're welcome for the list!

  • Anyone here tried both Nightcafe and Wombo? They look very similar - same engines/developers perhaps?

  • @Artj said:
    Anyone here tried both Nightcafe and Wombo? They look very similar - same engines/developers perhaps?

    They are all based on the same thing: VQGAN + CLIP.

    Midjourney also uses that, but their model has been tweaked for far better coherence than the others out there.
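As an aside, the "VQGAN + CLIP" recipe mentioned here is a guidance loop: CLIP scores how well the rendered image matches the text prompt, and the generator's latent is nudged to raise that score. Below is a toy sketch of that loop; the random linear "encoders" are hypothetical stand-ins for the real trained networks, and real tools use autograd on a GPU rather than finite differences.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the real networks: CLIP's encoders and the
# VQGAN decoder are large trained models; random linear maps are used
# here purely to show the shape of the guidance loop.
W_text = rng.normal(size=(64, 8))     # "CLIP text encoder"
W_image = rng.normal(size=(64, 16))   # "CLIP image encoder"
W_decode = rng.normal(size=(16, 16))  # "VQGAN decoder": latent -> image

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def image_embedding(latent):
    return W_image @ (W_decode @ latent)  # render the latent, then embed it

prompt_embedding = W_text @ rng.normal(size=8)  # "embed the text prompt"
latent = rng.normal(size=16)
initial_score = cosine(image_embedding(latent), prompt_embedding)

# Hill-climb the latent so the rendered image's embedding moves toward
# the prompt's embedding -- the core idea behind VQGAN+CLIP tools.
eps, lr = 1e-4, 0.1
for _ in range(200):
    base = cosine(image_embedding(latent), prompt_embedding)
    grad = np.zeros_like(latent)
    for i in range(latent.size):
        probe = latent.copy()
        probe[i] += eps
        grad[i] = (cosine(image_embedding(probe), prompt_embedding) - base) / eps
    latent += lr * grad

final_score = cosine(image_embedding(latent), prompt_embedding)
```

After the loop, the "image" matches the prompt better than the random starting point did, which is all these tools are doing at vastly larger scale.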

  • @farfromsubtle said:
    They are all based on the same thing: VQGAN + CLIP.

    Midjourney also uses that, but their model has been tweaked for far better coherence than the others out there.

    Ah, thank you very much for the info, @farfromsubtle 👍

  • @farfromsubtle said:

    @Artj said:
    Anyone here tried both Nightcafe and Wombo? They look very similar - same engines/developers perhaps?

    They are all based on the same thing: VQGAN + CLIP.

    Midjourney also uses that, but their model has been tweaked for far better coherence than the others out there.

    Seems like it for portraits but not so much environments.

  • @AudioGus said:

    @farfromsubtle said:

    @Artj said:
    Anyone here tried both Nightcafe and Wombo? They look very similar - same engines/developers perhaps?

    They are all based on the same thing: VQGAN + CLIP.

    Midjourney also uses that, but their model has been tweaked for far better coherence than the others out there.

    Seems like it for portraits but not so much environments.

    I was quite pleased with these. I was trying to get some visualizations for a Memphis Group style movie set for a project I am working on.


    Though I believe Disco Diffusion is better at environments, it is also about 1000x slower (not exaggerating). Each of those shots above took about 30 seconds.

  • edited April 23

    @farfromsubtle said:

    @AudioGus said:

    @farfromsubtle said:

    @Artj said:
    Anyone here tried both Nightcafe and Wombo? They look very similar - same engines/developers perhaps?

    They are all based on the same thing: VQGAN + CLIP.

    Midjourney also uses that, but their model has been tweaked for far better coherence than the others out there.

    Seems like it for portraits but not so much environments.

    I was quite pleased with these. I was trying to get some visualizations for a Memphis Group style movie set for a project I am working on.


    Though I believe Disco Diffusion is better at environments, it is also about 1000x slower (not exaggerating). Each of those shots above took about 30 seconds.

    Those look great! Can't wait to get into Midjourney for the ease/speed.

    Wouldn’t “1000x slower (not exaggerating)” literally mean, though, that you would expect these to take 500 minutes (over eight hours) to render in Disco? Once people get to know Disco, they can render something like this in under a minute (reduce the dataset count, tweak the cutn settings, etc.), and you also gain the potential for much higher fidelity, coherence, control and animation options. Of course, the highest bar of Disco potential takes more render time, and it is definitely more complex and noodle-y, requiring you to set up your own render settings, but it is not as inherently slow as you make it out to be once you get to know it.

    That being said, if I had MidJourney, hell yah I would use it a ton, maybe/probably more. Hook a brother up with a beta invite? ;)
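For reference, the render-time arithmetic in this exchange is easy to check against the quoted figures (about 30 seconds per shot, and a claimed 1000x slowdown):

```python
# Figures quoted in the thread: ~30 s per Midjourney shot, and the claim
# that Disco Diffusion is "about 1000x slower (not exaggerating)".
midjourney_render_s = 30
claimed_slowdown = 1000

disco_s = midjourney_render_s * claimed_slowdown
disco_min = disco_s / 60       # 500 minutes
disco_hours = disco_s / 3600   # just over 8 hours
print(disco_min, disco_hours)
```

Taken literally, the claim does imply a 500-minute render, which is the point being disputed.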

  • @farfromsubtle said:

    @AudioGus said:

    @farfromsubtle said:

    @Artj said:
    Anyone here tried both Nightcafe and Wombo? They look very similar - same engines/developers perhaps?

    They are all based on the same thing: VQGAN + CLIP.

    Midjourney also uses that, but their model has been tweaked for far better coherence than the others out there.

    Seems like it for portraits but not so much environments.

    I was quite pleased with these. I was trying to get some visualizations for a Memphis Group style movie set for a project I am working on.

    Though I believe Disco Diffusion is better at environments, it is also about 1000x slower (not exaggerating). Each of those shots above took about 30 seconds.

    The renderings that come out of these programs are good enough to give a designer inspiration for decisions on direction, but they are so impressionistic, and so lacking in coherent form and logic if one scrutinizes the details, that they are really just broad-brush sketches: what an illustrator or conceptual designer might provide in rough form, but for hundreds (or thousands) of dollars less at the initial stages.

    On the downside, you're never 100% sure where these images are coming from since they are derived from many, many sources of input. If the algorithm creates something too similar to a real-world thing, copyright infringement might become an issue.

  • @AudioGus said:
    Of course, Wombo! Another one tied.

    WOMBO is phenomenal. It’s a free app and just had another cool update.

  • @NeuM said:

    @farfromsubtle said:

    @AudioGus said:

    @farfromsubtle said:

    @Artj said:
    Anyone here tried both Nightcafe and Wombo? They look very similar - same engines/developers perhaps?

    They are all based on the same thing: VQGAN + CLIP.

    Midjourney also uses that, but their model has been tweaked for far better coherence than the others out there.

    Seems like it for portraits but not so much environments.

    I was quite pleased with these. I was trying to get some visualizations for a Memphis Group style movie set for a project I am working on.

    Though I believe Disco Diffusion is better at environments, it is also about 1000x slower (not exaggerating). Each of those shots above took about 30 seconds.

    The renderings that come out of these programs are good enough to give a designer inspiration for decisions on direction, but they are so impressionistic, and so lacking in coherent form and logic if one scrutinizes the details, that they are really just broad-brush sketches: what an illustrator or conceptual designer might provide in rough form, but for hundreds (or thousands) of dollars less at the initial stages.

    With Disco Diffusion you can get very detailed, non-impressionistic results that are 80% done.

    On the downside, you're never 100% sure where these images are coming from since they are derived from many, many sources of input. If the algorithm creates something too similar to a real-world thing, copyright infringement might become an issue.

    With diffusion rendering using a bunch of datasets, I think there are so many sources of input referenced (thousands of images for a single prompt) that it amounts to a very low percentage of input from each. With something like DALL-E 2, though, and whatever proprietary hackery it has going on, I have seen things that look like cut-and-paste clip-art results; the same dramatic low angle of a car, etc. More coherent, but less original, than pure diffusion rendering.

  • edited April 23

    @Poppadocrock said:

    @AudioGus said:
    Of course, Wombo! Another one tied.

    WOMBO is phenomenal. It’s a free app and just had another cool update.

    Very cool, but it doesn't seem to do landscape.

  • @AudioGus said:

    @NeuM said:

    @farfromsubtle said:

    @AudioGus said:

    @farfromsubtle said:

    @Artj said:
    Anyone here tried both Nightcafe and Wombo? They look very similar - same engines/developers perhaps?

    They are all based on the same thing: VQGAN + CLIP.

    Midjourney also uses that, but their model has been tweaked for far better coherence than the others out there.

    Seems like it for portraits but not so much environments.

    I was quite pleased with these. I was trying to get some visualizations for a Memphis Group style movie set for a project I am working on.

    Though I believe Disco Diffusion is better at environments, it is also about 1000x slower (not exaggerating). Each of those shots above took about 30 seconds.

    The renderings that come out of these programs are good enough to give a designer inspiration for decisions on direction, but they are so impressionistic, and so lacking in coherent form and logic if one scrutinizes the details, that they are really just broad-brush sketches: what an illustrator or conceptual designer might provide in rough form, but for hundreds (or thousands) of dollars less at the initial stages.

    With Disco Diffusion you can get very detailed, non-impressionistic results that are 80% done.

    On the downside, you're never 100% sure where these images are coming from since they are derived from many, many sources of input. If the algorithm creates something too similar to a real-world thing, copyright infringement might become an issue.

    With diffusion rendering using a bunch of datasets, I think there are so many sources of input referenced (thousands of images for a single prompt) that it amounts to a very low percentage of input from each. With something like DALL-E 2, though, and whatever proprietary hackery it has going on, I have seen things that look like cut-and-paste clip-art results; the same dramatic low angle of a car, etc. More coherent, but less original, than pure diffusion rendering.

    Found a list of Disco Diffusion creators/users: https://weirdwonderfulai.art/resources/disco-diffusion-70-plus-artist-studies/

  • @NeuM said:

    @AudioGus said:

    @NeuM said:

    @farfromsubtle said:

    @AudioGus said:

    @farfromsubtle said:

    @Artj said:
    Anyone here tried both Nightcafe and Wombo? They look very similar - same engines/developers perhaps?

    They are all based on the same thing: VQGAN + CLIP.

    Midjourney also uses that, but their model has been tweaked for far better coherence than the others out there.

    Seems like it for portraits but not so much environments.

    I was quite pleased with these. I was trying to get some visualizations for a Memphis Group style movie set for a project I am working on.

    Though I believe Disco Diffusion is better at environments, it is also about 1000x slower (not exaggerating). Each of those shots above took about 30 seconds.

    The renderings that come out of these programs are good enough to give a designer inspiration for decisions on direction, but they are so impressionistic, and so lacking in coherent form and logic if one scrutinizes the details, that they are really just broad-brush sketches: what an illustrator or conceptual designer might provide in rough form, but for hundreds (or thousands) of dollars less at the initial stages.

    With Disco Diffusion you can get very detailed, non-impressionistic results that are 80% done.

    On the downside, you're never 100% sure where these images are coming from since they are derived from many, many sources of input. If the algorithm creates something too similar to a real-world thing, copyright infringement might become an issue.

    With diffusion rendering using a bunch of datasets, I think there are so many sources of input referenced (thousands of images for a single prompt) that it amounts to a very low percentage of input from each. With something like DALL-E 2, though, and whatever proprietary hackery it has going on, I have seen things that look like cut-and-paste clip-art results; the same dramatic low angle of a car, etc. More coherent, but less original, than pure diffusion rendering.

    Found a list of Disco Diffusion creators/users: https://weirdwonderfulai.art/resources/disco-diffusion-70-plus-artist-studies/

    Those aren't actually users. Those are prompts using those artists' names.

  • @AudioGus said:

    @NeuM said:

    @AudioGus said:

    @NeuM said:

    @farfromsubtle said:

    @AudioGus said:

    @farfromsubtle said:

    @Artj said:
    Anyone here tried both Nightcafe and Wombo? They look very similar - same engines/developers perhaps?

    They are all based on the same thing: VQGAN + CLIP.

    Midjourney also uses that, but their model has been tweaked for far better coherence than the others out there.

    Seems like it for portraits but not so much environments.

    I was quite pleased with these. I was trying to get some visualizations for a Memphis Group style movie set for a project I am working on.

    Though I believe Disco Diffusion is better at environments, it is also about 1000x slower (not exaggerating). Each of those shots above took about 30 seconds.

    The renderings that come out of these programs are good enough to give a designer inspiration for decisions on direction, but they are so impressionistic, and so lacking in coherent form and logic if one scrutinizes the details, that they are really just broad-brush sketches: what an illustrator or conceptual designer might provide in rough form, but for hundreds (or thousands) of dollars less at the initial stages.

    With Disco Diffusion you can get very detailed, non-impressionistic results that are 80% done.

    On the downside, you're never 100% sure where these images are coming from since they are derived from many, many sources of input. If the algorithm creates something too similar to a real-world thing, copyright infringement might become an issue.

    With diffusion rendering using a bunch of datasets, I think there are so many sources of input referenced (thousands of images for a single prompt) that it amounts to a very low percentage of input from each. With something like DALL-E 2, though, and whatever proprietary hackery it has going on, I have seen things that look like cut-and-paste clip-art results; the same dramatic low angle of a car, etc. More coherent, but less original, than pure diffusion rendering.

    Found a list of Disco Diffusion creators/users: https://weirdwonderfulai.art/resources/disco-diffusion-70-plus-artist-studies/

    Those aren't actually users. Those are prompts using those artists' names.

    Ah, sorry about that.

  • @AudioGus said:

    @farfromsubtle said:

    @AudioGus said:

    @farfromsubtle said:

    @Artj said:
    Anyone here tried both Nightcafe and Wombo? They look very similar - same engines/developers perhaps?

    They are all based on the same thing: VQGAN + CLIP.

    Midjourney also uses that, but their model has been tweaked for far better coherence than the others out there.

    Seems like it for portraits but not so much environments.

    I was quite pleased with these. I was trying to get some visualizations for a Memphis Group style movie set for a project I am working on.


    Though I believe Disco Diffusion is better at environments, it is also about 1000x slower (not exaggerating). Each of those shots above took about 30 seconds.

    Those look great! Can't wait to get into Midjourney for the ease/speed.

    Wouldn’t “1000x slower (not exaggerating)” literally mean, though, that you would expect these to take 500 minutes (over eight hours) to render in Disco? Once people get to know Disco, they can render something like this in under a minute (reduce the dataset count, tweak the cutn settings, etc.), and you also gain the potential for much higher fidelity, coherence, control and animation options. Of course, the highest bar of Disco potential takes more render time, and it is definitely more complex and noodle-y, requiring you to set up your own render settings, but it is not as inherently slow as you make it out to be once you get to know it.

    That being said, if I had MidJourney, hell yah I would use it a ton, maybe/probably more. Hook a brother up with a beta invite? ;)

    Yeah, that is just my experience with the Colab notebook. I didn’t put a ton of time in to get the really incredible results I have seen from some people.

  • edited April 24

    @NeuM said:

    On the downside, you're never 100% sure where these images are coming from since they are derived from many, many sources of input. If the algorithm creates something too similar to a real-world thing, copyright infringement might become an issue.

    The models are trained on millions upon millions of images. The way a neural network solidifies an image via diffusion is very much like how a human does. Every work an artist makes draws a tiny bit upon everything they have ever seen, except the model has seen and remembered more images than a human ever could.

    I would actually say that a human artist has a greater chance of accidentally creating something too similar to another work than the model does.
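The "solidifying" described above is iterative denoising: start from noise and repeatedly nudge the signal toward what the model believes a clean image looks like. A toy sketch of those mechanics; the fixed target pattern is an assumed stand-in for a trained network's prior, not a real model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "learned prior": a real diffusion model encodes what it learned from
# millions of training images; a fixed pattern stands in for that here.
target = np.sin(np.linspace(0, 2 * np.pi, 32))

def denoise_step(x, t, total):
    """One reverse-diffusion step: remove a little noise by blending the
    signal toward the prior's prediction, trusting it more each step."""
    blend = 1.0 / (total - t + 1)
    return (1 - blend) * x + blend * target

steps = 50
x0 = rng.normal(size=32)   # start from pure noise
x = x0.copy()
for t in range(steps):
    x = denoise_step(x, t, steps)
# x has now "solidified": far closer to the prior than the noise was
```

No single training image survives the process recognizably; what guides each step is the aggregate prior, which is the point being made about attribution.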

  • @AudioGus said:

    @Poppadocrock said:

    @AudioGus said:
    Of course, Wombo! Another one tied.

    WOMBO is phenomenal. It’s a free app and just had another cool update.

    Very cool, but it doesn't seem to do landscape.

    Not yet. I’d definitely welcome that. I think they take requests through the app, or maybe their email is in the App Store; I saw it somewhere.

  • @farfromsubtle said:

    @NeuM said:

    On the downside, you're never 100% sure where these images are coming from since they are derived from many, many sources of input. If the algorithm creates something too similar to a real-world thing, copyright infringement might become an issue.

    The models are trained on millions upon millions of images. The way a neural network solidifies an image via diffusion is very much like how a human does. Every work an artist makes draws a tiny bit upon everything they have ever seen, except the model has seen and remembered more images than a human ever could.

    I would actually say that a human artist has a greater chance of accidentally creating something too similar to another work than the model does.

    It’s easier to sue a human than an algorithm. ;)
