"A.I." (Machine Learning Algorithms) To Generate Art


Comments

  • @AudioGus said:

    But yah, nerd stuff aside, this will dramatically impact a lot of livelihoods, and unfortunately some people will not be able to adapt, or may essentially be edged out in part by their own work having been machine-learned from.

    So it’s derived from others’ works, which are then machine-learned and regurgitated, and future artists are to be seeders to this machine’s database? Then sculptors, then musicians, then writers, then actors…

  • edited August 2022

    @AudioGus said:

    I get what you are saying but that isn't really how this works. When you type a prompt in, the AI does not go to the internet looking for images. You can actually run this stuff locally offline. If you are interested there are a bunch of videos that describe diffusion rendering and how datasets are made and what latent space is etc. It is not like a database of images that are then copy/pasted/rearranged. It is more like how a person can learn what something looks like and has the skill to reproduce it, but obviously that is not required for a person to use this.

    The systems are initially trained using existing photos and artwork. No system is clever enough to have rendered its own versions of say, John Lennon, without external reference. Or a tree. Flower. Acorn…in the style of Picasso, Dali, Froud etc.

    But I’m open to being proved wrong if you can explain how it has.

    I tried a couple of these generators and they seem pretty clued up on what ‘Boris Johnson eating a pie made from eels’ should look like. Quite unpleasant actually.

    @AudioGus said:

    But yah, nerd stuff aside, this will dramatically impact a lot of livelihoods, and unfortunately some people will not be able to adapt, or may essentially be edged out in part by their own work having been machine-learned from.

    I’m surprised (though by now I probably shouldn’t be) that people on this forum are promoting something that can potentially disrupt thousands of creative people’s incomes, and entire careers.

    Let’s try an example of where this could go if it caught on musically.

    ‘Hey Siri - play me Silver Machine in the style of the Beatles, White Album era’.

    So off our helpful AI producer goes, and applies a bunch of Beatle samples across a MIDI file of the Hawkwind track, with a style template of amps, equipment, EQ and fx circa 1968.

    Great, I always wanted to hear more Beatles music.

    But now we don’t need to buy records, or stream existing artists’ products, because we can generate our own in a few seconds. So we can do away with all those shops. And bands….there’s enough stuff AI has trained on to create trillions of new tracks, every day. So I guess no more musical hardware, since bands and musicians are now obsolete, or at best extremely scarce. Oh yeah, music apps too…bye bye! Who needs ‘em!!! All the associated musical services, this forum….etc., bye bye…

    Of course what might stop the above example is a flurry of copyright cases….but if the AI has rendered and re-rendered everything down so it’s hard to prove, it could still happen.

    Progress, the new shiny. It’s not always better than the old thing.

  • @monz0id said:
    The systems are initially trained using existing photos and artwork. No system is clever enough to have rendered its own versions of say, John Lennon, without external reference. Or a tree. Flower. Acorn…in the style of Picasso, Dali, Froud etc.

    Indeed, they are trained on a large Common Crawl dataset, which has scraped the internet for content (text-image pairs). There is quite a bit of creativity and ingenuity in how these models have been made, using a process of noising and denoising in a so-called latent space. Both text and image can be transformed to a latent space, where this process is carried out. The end result is an AI model that can generate a meaningful image by starting from pure noise, conditioned on the prompt entered by the user. You can liken it to a person who has read and seen everything online and thereafter acquired the skill to recreate any imagined idea.

    So yeah, no scraping during image generation, but initial scraping to get training data.
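    The noising/denoising idea above can be sketched in a toy form: start from pure noise and repeatedly nudge a sample toward a target, which is roughly what the reverse diffusion loop does. Everything here is made up for illustration (the `toy_denoiser` stands in for a trained neural network that predicts noise, conditioned on the prompt, and operates in a latent space):

```python
import numpy as np

# Toy sketch of reverse diffusion: begin with pure noise and iteratively
# "denoise" toward a target. A real model replaces `toy_denoiser` with a
# learned network conditioned on the text prompt; this is illustrative only.

rng = np.random.default_rng(seed=42)
target = np.array([1.0, 0.0, 0.5, 0.25])  # stand-in for a tiny "image"

def toy_denoiser(x, target, strength=0.2):
    # Pretend "learned" step: nudge the noisy sample toward the target.
    return x + strength * (target - x)

x = rng.standard_normal(4)   # step 0: pure noise
for step in range(50):       # iterative refinement
    x = toy_denoiser(x, target)

print(np.round(x, 3))        # very close to the target after 50 steps
```

    The point being: nothing is copied from a stored image at generation time; the loop converges on a coherent output from noise.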

    @AudioGus said:

    But yah, nerd stuff aside, this will dramatically impact a lot of livelihoods, and unfortunately some people will not be able to adapt, or may essentially be edged out in part by their own work having been machine-learned from.

    @monz0id said:
    I’m surprised (though by now I probably shouldn’t be) that people on this forum are promoting something that can potentially disrupt thousands of creative people’s incomes, and entire careers.

    This can't be an argument in the long run; it is just a rephrased luddite response.

    @monz0id said:
    So off our helpful AI producer goes, and applies a bunch of Beatle samples across a MIDI file of the Hawkwind track, with a style template of amps, equipment, EQ and fx circa 1968.

    No, the AI would not apply a bunch of samples to generate the output. It would generate the entire waveform in a similar denoising process, conditioned by your initial text prompt. Yes, it would need those samples and texts as training data in order to learn how to do that, much like any person would.

    @monz0id said:
    [...] Oh yeah, music apps too…bye bye! Who needs ‘em!!! All the associated musical services, this forum….etc., bye bye…

    If you enjoy the process then they will still be around. Not every garment is knitted by machines today.

    Maybe better to embrace and combine like @echoopera?

    @monz0id said:
    Of course what might stop the above example is a flurry of copyright cases….

    I agree, there are some interesting issues to discuss here. If the model outputs something that is too similar to what it found in its Common Crawl training data, is plagiarism the most relevant issue to discuss? Since its learning process can be likened to a person seeing everything and just learning to recreate, it doesn't feel like copyright is the proper issue. No material is directly re-used in the output.

    This topic should apply to large language models, too. They learn the content and style of texts and can generate output from a prompt. There are a few online discussions of legal issues related to this, but not much. This article seems to discuss the matter (just a quick Google find):
    "The sentimental fools and the fictitious authors: rethinking the copyright issues of AI-generated contents in China"

  • edited August 2022

    @bleep said:

    So yeah, no scraping during image generation, but initial scraping to get train data.

    What about new content? When the next D-list celebrity hits the scene, where will the 'data' come from to generate their likeness? Will they do more scraping?

    How about the data scraped previously - have those artists and photographers received a credit, payment? Were they even asked if their work could be monetised by a new company?

    @bleep said:

    This can't be an argument in the long run; it is just a rephrased luddite response.

    From your link:

    "Modern usage - Nowadays, the term "luddite" often is used to describe someone who is opposed or resistant to new technologies."

    The majority of the artists who work in this genre would be offended by that label, since most have openly embraced new technology, and have invested in and used it in their work.

    Technology can be an extremely useful tool for artists' work, and sure, you can use elements of what's been artificially generated or use it as a reference. But the current application of this new stuff seems to be employed as a replacement for actual creativity, learned skills, and talent.

    @bleep said:

    No, the AI would not apply a bunch of samples to generate the output. It would generate the entire waveform in a similar denoising process, conditioned by your initial text prompt. Yes, it would need those samples and texts as training data in order to learn how to do that, much like any person would.

    So yes to scraping samples from existing musicians' work then, but regenerating them so they sound exactly like the originals. Right, gotcha.

    @bleep said:

    Since its learning process can be likened to a person seeing everything and just learning to recreate

    But it's not. You've told me it's scraped existing artists' work, 'learned' it, and is then re-arranging the output with other stuff, like a Google Image Search on steroids, for money, with none of the revenue going back to the original artist. The 'luddite' who now doesn't have a job.

  • edited August 2022

    @monz0id said:

    @AudioGus said:

    I get what you are saying but that isn't really how this works. When you type a prompt in, the AI does not go to the internet looking for images. You can actually run this stuff locally offline. If you are interested there are a bunch of videos that describe diffusion rendering and how datasets are made and what latent space is etc. It is not like a database of images that are then copy/pasted/rearranged. It is more like how a person can learn what something looks like and has the skill to reproduce it, but obviously that is not required for a person to use this.

    The systems are initially trained using existing photos and artwork. No system is clever enough to have rendered its own versions of say, John Lennon, without external reference. Or a tree. Flower. Acorn…in the style of Picasso, Dali, Froud etc.

    But I’m open to being proved wrong if you can explain how it has.

    Yes, the systems are trained by looking at images. Typical story: years ago, when I started out, I was into moody goth-boy paintings like Beksinski's, and I made my own ham-fisted renditions of these pieces, but with different compositions and content. Where Beksinski would have a burned-out skeletal car on a landscape, I would put a locomotive. So I would look at photos of locomotives and render them with a similar treatment, etc. Eventually I was tired of making Beksinski knockoffs, so I would incorporate things I saw from other artists in different genres, and my own observations of nature. I spent a few years developing a style, eventually got bored, and moved on to a whole new set of influences, a whole new style, now incorporating far more: dozens of artists, a whole new spectrum, without being focused on just one. AI essentially does this same thing, but within weeks of training on a gigantic mass of data spanning billions of images.

    I tried a couple of these generators and they seem pretty clued up on what ‘Boris Johnson eating a pie made from eels’ should look like. Quite unpleasant actually.

    Yah it doesn't take long to get that stuff out of your system.

    @AudioGus said:

    But yah, nerd stuff aside, this will dramatically impact a lot of livelihoods, and unfortunately some people will not be able to adapt, or may essentially be edged out in part by their own work having been machine-learned from.

    I’m surprised (though by now I probably shouldn’t be) that people on this forum are promoting something that can potentially disrupt thousands of creative people’s incomes, and entire careers.

    Oh, I would say it is not even a potential. It is a foregone conclusion, and has been for quite some time now, but most people are just hearing about it. I recently had a conversation with an Art Director at a rather large games studio and they said there will definitely be a culling of the herd. In my case, I am more in the shock-and-awe phase at the rate of progress over the past year rather than promotion. It really doesn't need to be promoted at all. There are simply those who need to embrace it to survive. If there are people who do not want or need to use it and they can still survive, good on them. Many of us are in situations where there simply is no choice.

    Let’s try an example of where this could go if it caught on musically.

    ‘Hey Siri - play me Silver Machine in the style of the Beatles, White Album era’.

    So off our helpful AI producer goes, and applies a bunch of Beatle samples across a MIDI file of the Hawkwind track, with a style template of amps, equipment, EQ and fx circa 1968.

    Great, I always wanted to hear more Beatles music.

    The vast majority of people using these tools have no interest in trying to make knockoffs of a singular artist. Most prompts people make draw from many influences at once to form new hybrids. Really, that is what most artists should fear: other artists who embrace AI to make new, novel connections, not people who use AI to essentially make forgeries of their work. One thing I think artists fear is that the value of having a strong, singular, readily identifiable style or identity will be far more difficult to maintain as variation and novelty are pushed to new extremes. Doing the same thing for years on end simply will not be as well rewarded as switching up and developing whole new styles and ideas with increasing frequency.

    But now we don’t need to buy records, or stream existing artists’ products, because we can generate our own in a few seconds. So we can do away with all those shops. And bands….there’s enough stuff AI has trained on to create trillions of new tracks, every day. So I guess no more musical hardware, since bands and musicians are now obsolete, or at best extremely scarce. Oh yeah, music apps too…bye bye! Who needs ‘em!!! All the associated musical services, this forum….etc., bye bye…

    Picard on the Holodeck, "Computer, play an undiscovered musical composition by Rachmaninov (Earl Grey, hot, etc)". Nobody saw that as a bad thing in the 80s but here we are.

    Of course what might stop the above example is a flurry of copyright cases….but if the AI has rendered and re-rendered everything down so it’s hard to prove, it could still happen.

    Progress, the new shiny. It’s not always better than the old thing.

    Yah in my case it is just about keeping up with how to incorporate AI for sake of survival. This stuff has been floating in the wings for years now so finally having it in this form is a bit of a terrifying relief. It is a new thing to wrestle with in terms of livelihood but at least it is fully in the light and revealed now.

  • edited August 2022

    There's a lot of conjecture here on something that's still being developed and imperfect in a lot of ways.

    Based on the inputs I've done, the results are close to what I want but never perfect. Even when I put the same prompts in multiple times, the results vary each time. In the way I'm using this, I need to modify and edit the output to be satisfied with the results. So I'd say human input is still just as important. You can't (yet) create a series of scenes that are linear, so we're very far from what's being described as a possible outcome to this. It's better to fear the AI mechanical robots/dogs or other systems that are also being developed than this type of stuff.

    To those who are so vocal against this technology: are you also against MIDI generators, sequencers, DAWs, and digital instruments because they make it easy for music theory novices to generate music? There are hundreds of thousands more people making music these days than 30-50 years ago, because technology allows access to making music digitally without a physical instrument or recording equipment. Is that a bad thing too?

  • @auxmux said:
    There's a lot of conjecture here on something that's still being developed and imperfect in a lot of ways.

    Based on the inputs I've done, the results are close to what I want but never perfect. Even when I put the same prompts in multiple times, the results vary each time. In the way I'm using this, I need to modify and edit the output to be satisfied with the results. So I'd say human input is still just as important. You can't (yet) create a series of scenes that are linear, so we're very far from what's being described as a possible outcome to this. It's better to fear the AI mechanical robots/dogs or other systems that are also being developed than this type of stuff.

    have you used Stable Diffusion?

  • @AudioGus said:

    @auxmux said:
    There's a lot of conjecture here on something that's still being developed and imperfect in a lot of ways.

    Based on the inputs I've done, the results are close to what I want but never perfect. Even when I put the same prompts in multiple times, the results vary each time. In the way I'm using this, I need to modify and edit the output to be satisfied with the results. So I'd say human input is still just as important. You can't (yet) create a series of scenes that are linear, so we're very far from what's being described as a possible outcome to this. It's better to fear the AI mechanical robots/dogs or other systems that are also being developed than this type of stuff.

    have you used Stable Diffusion?

    Not yet.

  • edited August 2022

    @auxmux said:

    @AudioGus said:

    @auxmux said:
    There's a lot of conjecture here on something that's still being developed and imperfect in a lot of ways.

    Based on the inputs I've done, the results are close to what I want but never perfect. Even when I put the same prompts in multiple times, the results vary each time. In the way I'm using this, I need to modify and edit the output to be satisfied with the results. So I'd say human input is still just as important. You can't (yet) create a series of scenes that are linear, so we're very far from what's being described as a possible outcome to this. It's better to fear the AI mechanical robots/dogs or other systems that are also being developed than this type of stuff.

    have you used Stable Diffusion?

    Not yet.

    It is a very different beast. The same prompt and seed number generates the same result. You can then tweak the prompt and see the changes affect the image. It takes a couple of seconds to render an image on a beefy gaming GPU, and very soon it will be close to realtime. The level of coherence is insane. But yes, I agree that human interaction will of course have the edge, as I don't feel humans will be completely useless for quite some time. ;)
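    The determinism point above (same prompt and seed number gives the same image) falls out of the starting noise being drawn from a seeded pseudo-random generator. A minimal sketch, with a made-up `render` function standing in for the real pipeline:

```python
import hashlib
import numpy as np

def render(prompt: str, seed: int) -> np.ndarray:
    """Hypothetical stand-in for a diffusion pipeline: the only source of
    randomness is the seeded starting noise, so identical (prompt, seed)
    pairs always produce identical output."""
    # Derive a deterministic offset from the prompt text.
    digest = hashlib.sha256(prompt.encode()).digest()
    prompt_bias = np.frombuffer(digest[:8], dtype=np.uint8) / 255.0
    # Seeded generator: this is what the "seed number" controls.
    noise = np.random.default_rng(seed).standard_normal(8)
    return noise + prompt_bias  # pretend "image"

a = render("boris johnson eating a pie", seed=1234)
b = render("boris johnson eating a pie", seed=1234)
c = render("boris johnson eating a pie", seed=9999)
print(np.array_equal(a, b))  # same seed: identical output
print(np.array_equal(a, c))  # different seed: different output
```

    Services that don't expose the seed (or pick a fresh one each run) are why the same prompt can give different results every time.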

  • @AudioGus said:

    @auxmux said:

    @AudioGus said:

    @auxmux said:
    There's a lot of conjecture here on something that's still being developed and imperfect in a lot of ways.

    Based on the inputs I've done, the results are close to what I want but never perfect. Even when I put the same prompts in multiple times, the results vary each time. In the way I'm using this, I need to modify and edit the output to be satisfied with the results. So I'd say human input is still just as important. You can't (yet) create a series of scenes that are linear, so we're very far from what's being described as a possible outcome to this. It's better to fear the AI mechanical robots/dogs or other systems that are also being developed than this type of stuff.

    have you used Stable Diffusion?

    Not yet.

    It is a very different beast. The same prompt and seed number generates the same result. You can then tweak the prompt and see the changes affect the image. It takes a couple of seconds to render an image on a beefy gaming GPU, and very soon it will be close to realtime. The level of coherence is insane. But yes, I agree that human interaction will of course have the edge, as I don't feel humans will be completely useless for quite some time. ;)

    Cool, are you running it locally, or on Nightcafe or another online implementation? Trying to see the best option to use. I prefer using my iPad, which is convenient for DALL-E and Midjourney.

  • @auxmux said:

    @AudioGus said:

    @auxmux said:

    @AudioGus said:

    @auxmux said:
    There's a lot of conjecture here on something that's still being developed and imperfect in a lot of ways.

    Based on the inputs I've done, the results are close to what I want but never perfect. Even when I put the same prompts in multiple times, the results vary each time. In the way I'm using this, I need to modify and edit the output to be satisfied with the results. So I'd say human input is still just as important. You can't (yet) create a series of scenes that are linear, so we're very far from what's being described as a possible outcome to this. It's better to fear the AI mechanical robots/dogs or other systems that are also being developed than this type of stuff.

    have you used Stable Diffusion?

    Not yet.

    It is a very different beast. The same prompt and seed number generates the same result. You can then tweak the prompt and see the changes affect the image. It takes a couple of seconds to render an image on a beefy gaming GPU, and very soon it will be close to realtime. The level of coherence is insane. But yes, I agree that human interaction will of course have the edge, as I don't feel humans will be completely useless for quite some time. ;)

    Cool, are you running it locally, or on Nightcafe or another online implementation? Trying to see the best option to use. I prefer using my iPad, which is convenient for DALL-E and Midjourney.

    I used the Discord bot during the first beta (which was the best), then the DreamStudio website beta (2nd best), which is also developed by Stability AI, and have now settled on a Google Colab Python notebook (tied for second best), mostly for the batch rendering but also because it lets me feed in my own sketches to build off of.

    I kicked the tires on the Nightcafe version, but it feels watered down and is still pretty goopy.

  • edited August 2022

    @AudioGus said:
    The vast majority of people using these tools have no interest in trying to make knockoffs of a singular artist. Most prompts people make draw from many influences at once to form new hybrids. Really, that is what most artists should fear: other artists who embrace AI to make new, novel connections, not people who use AI to essentially make forgeries of their work. One thing I think artists fear is that the value of having a strong, singular, readily identifiable style or identity will be far more difficult to maintain as variation and novelty are pushed to new extremes. Doing the same thing for years on end simply will not be as well rewarded as switching up and developing whole new styles and ideas with increasing frequency.

    I think it would be a bit of both - if there’s an easy way to generate a piece of music via an app, listen and/or share it on social media, then it’ll catch on with the public.

    But yeah, I can imagine producers and publishers will be rubbing their hands with glee, at the thought of cutting out the band/artist.

    @auxmux said:

    To those who are so vocal against this technology: are you also against MIDI generators, sequencers, DAWs, and digital instruments because they make it easy for music theory novices to generate music? There are hundreds of thousands more people making music these days than 30-50 years ago, because technology allows access to making music digitally without a physical instrument or recording equipment. Is that a bad thing too?

    No, because they’re using them as tools and instruments; they’re not scraping chunks of some band’s existing music and repurposing it as their own. And if they did, they’d be sued.

    I think of Ableton Live, for example, as the audio equivalent of Photoshop. It doesn’t knock out a song in 3 minutes based on a few keywords and some scraped content, but allows me to build a multi-layered piece of music using my own recorded notes and audio. Just as Photoshop provides tools for editing my photos and creating digital art from scratch.

  • @auxmux said:
    There's a lot of conjecture here on something that's still being developed and imperfect in a lot of ways.

    Based on the inputs I've done, the results are close to what I want but never perfect. Even when I put the same prompts in multiple times, the results vary each time. In the way I'm using this, I need to modify and edit the output to be satisfied with the results. So I'd say human input is still just as important. You can't (yet) create a series of scenes that are linear, so we're very far from what's being described as a possible outcome to this. It's better to fear the AI mechanical robots/dogs or other systems that are also being developed than this type of stuff.

    To those who are so vocal against this technology: are you also against MIDI generators, sequencers, DAWs, and digital instruments because they make it easy for music theory novices to generate music? There are hundreds of thousands more people making music these days than 30-50 years ago, because technology allows access to making music digitally without a physical instrument or recording equipment. Is that a bad thing too?

    You’re hardly comparing like for like. MIDI generators? I don’t think they produce a finished ‘article’ yet, though given more time, it’s where we are heading. The application in these forms should be worrying; in other fields it should be terrifying. It’s moving at such a pace that people are conforming as best they can, lest they are culled. Legislation will be AWOL in regard to this situation; it’s still trying to get a grip on the internet and social media.

  • edited August 2022

    @knewspeak said:
    The application in these forms should be worrying; in other fields it should be terrifying. It’s moving at such a pace that people are conforming as best they can, lest they are culled. Legislation will be AWOL in regard to this situation; it’s still trying to get a grip on the internet and social media.

    Just imagine when the automated text bots become more polished - as well as social media accounts, there’ll be millions of completely automated websites with AI ‘staff’ promoting and reviewing AI generated content, so sophisticated it’ll even fool the search engine bots.

    The speed and ease of how they can generate content will totally overwhelm the ‘real’ stuff.

    Artists won’t get a look in. Unless they pay, of course.

    God help us when they turn their attention to politics.

  • @monz0id said:

    @AudioGus said:
    The vast majority of people using these tools have no interest in trying to make knockoffs of a singular artist. Most prompts people make draw from many influences at once to form new hybrids. Really, that is what most artists should fear: other artists who embrace AI to make new, novel connections, not people who use AI to essentially make forgeries of their work. One thing I think artists fear is that the value of having a strong, singular, readily identifiable style or identity will be far more difficult to maintain as variation and novelty are pushed to new extremes. Doing the same thing for years on end simply will not be as well rewarded as switching up and developing whole new styles and ideas with increasing frequency.

    I think it would be a bit of both - if there’s an easy way to generate a piece of music via an app, listen and/or share it on social media, then it’ll catch on with the public.

    But yeah, I can imagine producers and publishers will be rubbing their hands with glee, at the thought of cutting out the band/artist.

    I think a new form of creative that is also marketing / internet / social media savvy will emerge to cut out the cigar chompers. The 'Mr Beasts' of art and entertainment capable of making epic works that used to take massive corporate operations to execute on.

  • @monz0id said:

    @knewspeak said:
    The application in these forms should be worrying; in other fields it should be terrifying. It’s moving at such a pace that people are conforming as best they can, lest they are culled. Legislation will be AWOL in regard to this situation; it’s still trying to get a grip on the internet and social media.

    Just imagine when the automated text bots become more polished - as well as social media accounts, there’ll be millions of completely automated websites with AI ‘staff’ promoting and reviewing AI generated content, so sophisticated it’ll even fool the search engine bots.

    The speed and ease of how they can generate content will totally overwhelm the ‘real’ stuff.

    Artists won’t get a look in. Unless they pay, of course.

    God help us when they turn their attention to politics.

    The consequences could be utterly crazy, but it fits quite well into the craziness of the world taking shape. Legislation can’t yet decide who’s responsible when autonomous vehicles have accidents; the collateral impacts of the ‘machine learning world’ haven’t even started to be thought through. Of course it will be said: ‘if we don’t do it, somebody else will’.

  • @AudioGus said:

    @auxmux said:

    @AudioGus said:

    @auxmux said:

    @AudioGus said:

    @auxmux said:
    There's a lot of conjecture here on something that's still being developed and imperfect in a lot of ways.

    Based on the inputs I've done, the results are close to what I want but never perfect. Even when I put the same prompts in multiple times, the results vary each time. In the way I'm using this, I need to modify and edit the output to be satisfied with the results. So I'd say human input is still just as important. You can't (yet) create a series of scenes that are linear, so we're very far from what's being described as a possible outcome to this. It's better to fear the AI mechanical robots/dogs or other systems that are also being developed than this type of stuff.

    have you used Stable Diffusion?

    Not yet.

    It is a very different beast. The same prompt and seed number generates the same result. You can then tweak the prompt and see the changes affect the image. It takes a couple of seconds to render an image on a beefy gaming GPU, and very soon it will be close to realtime. The level of coherence is insane. But yes, I agree that human interaction will of course have the edge, as I don't feel humans will be completely useless for quite some time. ;)

    Cool, are you running it locally, or on Nightcafe or another online implementation? Trying to see the best option to use. I prefer using my iPad, which is convenient for DALL-E and Midjourney.

    I used the Discord bot during the first beta (which was the best), then the DreamStudio website beta (2nd best), which is also developed by Stability AI, and have now settled on a Google Colab Python notebook (tied for second best), mostly for the batch rendering but also because it lets me feed in my own sketches to build off of.

    I kicked the tires on the Nightcafe version, but it feels watered down and is still pretty goopy.

    Nice, I'll give DreamStudio a shot, thanks.

  • edited August 2022

    What’s the max resolution for Stable Diffusion renders, @AudioGus? Is it still limited to 1024x1024?

    FWIW, I’ve been using Pixelmator Photo to up-res everything from Midjourney, with great success.

  • @AudioGus said:

    @monz0id said:

    @AudioGus said:
    The vast majority of people using these tools have no interest in trying to make knockoffs of a singular artist. Most prompts people make draw from many influences at once to form new hybrids. Really, that is what most artists should fear: other artists who embrace AI to make new, novel connections, not people who use AI to essentially make forgeries of their work. One thing I think artists fear is that the value of having a strong, singular, readily identifiable style or identity will be far more difficult to maintain as variation and novelty are pushed to new extremes. Doing the same thing for years on end simply will not be as well rewarded as switching up and developing whole new styles and ideas with increasing frequency.

    I think it would be a bit of both - if there’s an easy way to generate a piece of music via an app, listen and/or share it on social media, then it’ll catch on with the public.

    But yeah, I can imagine producers and publishers will be rubbing their hands with glee, at the thought of cutting out the band/artist.

    I think a new form of creative that is also marketing / internet / social media savvy will emerge to cut out the cigar chompers. The 'Mr Beasts' of art and entertainment capable of making epic works that used to take massive corporate operations to execute on.

    But it won’t be limited to creativity; it’s already in other areas of research, even researching itself. Extrapolate the future, which could be approaching blisteringly fast… machines learning machines, humans hoping for a divine spark that imbues upon them a consciousness, a loving, benevolent consciousness.

  • edited August 2022

    @AudioGus said:

    I think a new form of creative that is also marketing / internet / social media savvy will emerge to cut out the cigar chompers. The 'Mr Beasts' of art and entertainment capable of making epic works that used to take massive corporate operations to execute on.

    I seem to remember a quote from Frank Zappa, where he praised the old cigar-chomping label bosses, as they were more inclined to take risks with new artists, whereas their younger replacements were always looking for the current safe, hip thing.

    God help us when PR companies take over that role. I guess they'll all be art directors soon too.

    AI music and art based on an algorithm.

    @knewspeak said:

    The consequences could be utterly crazy, but that fits quite well with the craziness of the world taking shape; legislation can’t yet decide who’s responsible when autonomous vehicles have accidents, and the collateral impacts of the ‘machine learning world’ haven’t even started to be thought through.

    The World is totally nuts at the moment, and we need more skilled human interaction, not less.

  • @monz0id said:

    @AudioGus said:

    I think a new form of creative that is also marketing / internet / social media savvy will emerge to cut out the cigar chompers. The 'Mr Beasts' of art and entertainment capable of making epic works that used to take massive corporate operations to execute on.

    I seem to remember a quote from Frank Zappa, where he praised the old cigar-chomping label bosses, as they were more inclined to take risks with new artists, whereas their younger replacements were always looking for the current safe, hip thing.

    God help us when PR companies take over that role. I guess they'll all be art directors soon too.

    AI music and art based on an algorithm.

    Yesterday I had a meeting at 10:30am. We brainstormed for an hour about a new game environment that didn't yet have a clear story, looking over previous images as general vague reference. After the meeting I then hit Stable Diffusion for about five hours and literally had what would have been six months' worth of work done, all with a new theme and visuals I have never seen before. No art director or exec was needed / involved, and when I presented the results at the end of the day it was all thumbs up from everyone. I think for now this empowers creatives with ideas to create tsunami nuclear bombs of persuasion. We are a ways off from an exec saying "Siri: give me a great idea / product", but I have no doubt that in some areas algorithmic creative will be a thing. My retirement is about 18 years away. No idea what even next year looks like. O_o

  • Haven’t followed this thread, so this may have been mentioned. But what are the best generative video apps for iOS, whether AI or not. I like VS but for most of my stuff I don’t need anything tempo based. I’d love something that creates beautiful visuals, doesn’t need to necessarily move in time with the music, whether by midi input, audio input or whatever.

  • edited August 2022

    @AudioGus said:

    @monz0id said:

    @AudioGus said:

    I think a new form of creative that is also marketing / internet / social media savvy will emerge to cut out the cigar chompers. The 'Mr Beasts' of art and entertainment capable of making epic works that used to take massive corporate operations to execute on.

    I seem to remember a quote from Frank Zappa, where he praised the old cigar-chomping label bosses, as they were more inclined to take risks with new artists, whereas their younger replacements were always looking for the current safe, hip thing.

    God help us when PR companies take over that role. I guess they'll all be art directors soon too.

    AI music and art based on an algorithm.

    Yesterday I had a meeting at 10:30am. We brainstormed for an hour about a new game environment that didn't yet have a clear story, looking over previous images as general vague reference. After the meeting I then hit Stable Diffusion for about five hours and literally had what would have been six months' worth of work done, all with a new theme and visuals I have never seen before. No art director or exec was needed / involved, and when I presented the results at the end of the day it was all thumbs up from everyone. I think for now this empowers creatives with ideas to create tsunami nuclear bombs of persuasion. We are a ways off from an exec saying "Siri: give me a great idea / product", but I have no doubt that in some areas algorithmic creative will be a thing. My retirement is about 18 years away. No idea what even next year looks like. O_o

    I think that’s a positive use for it, using the results as a ‘mood/story board’, a sketchbook of ideas.

    It’s the ‘replacement’ for something, or someone, that bothers me, rather than an ‘extra tool’. The potential for a project manager to try it out for themselves, one evening at home, and the next day suggest to the directors they can save a few salaries by laying off the skilled artists, and getting an intern to generate some images instead.

  • edited August 2022

    None of us is advocating replacing artists and designers. There are always those who use any technology for a less than altruistic purpose. It's something that has to be accepted and expected, but it doesn't make sense to deride something that will continue to evolve.

    If/when it becomes sentient (LOL), most of us won't be around anyhow.

  • @auxmux said:
    None of us is advocating replacing artists and designers.

    What you advocate is neither here nor there. You have zero control over where this will go, and whose livelihoods it will destroy.

  • @monz0id said:

    @AudioGus said:

    @monz0id said:

    @AudioGus said:

    I think a new form of creative that is also marketing / internet / social media savvy will emerge to cut out the cigar chompers. The 'Mr Beasts' of art and entertainment capable of making epic works that used to take massive corporate operations to execute on.

    I seem to remember a quote from Frank Zappa, where he praised the old cigar-chomping label bosses, as they were more inclined to take risks with new artists, whereas their younger replacements were always looking for the current safe, hip thing.

    God help us when PR companies take over that role. I guess they'll all be art directors soon too.

    AI music and art based on an algorithm.

    Yesterday I had a meeting at 10:30am. We brainstormed for an hour about a new game environment that didn't yet have a clear story, looking over previous images as general vague reference. After the meeting I then hit Stable Diffusion for about five hours and literally had what would have been six months' worth of work done, all with a new theme and visuals I have never seen before. No art director or exec was needed / involved, and when I presented the results at the end of the day it was all thumbs up from everyone. I think for now this empowers creatives with ideas to create tsunami nuclear bombs of persuasion. We are a ways off from an exec saying "Siri: give me a great idea / product", but I have no doubt that in some areas algorithmic creative will be a thing. My retirement is about 18 years away. No idea what even next year looks like. O_o

    I think that’s a positive use for it, using the results as a ‘mood/story board’, a sketchbook of ideas.

    It’s the ‘replacement’ for something, or someone, that bothers me, rather than an ‘extra tool’. The potential for a project manager to try it out for themselves, one evening at home, and the next day suggest to the directors they can save a few salaries by laying off the skilled artists, and getting an intern to generate some images instead.

    It does mean that some games will need far fewer concept artists.

  • @auxmux said:
    None of us is advocating replacing artists and designers. There are always those who use any technology for a less than altruistic purpose. It's something that has to be accepted and expected, but it doesn't make sense to deride something that will continue to evolve.

    If/when it becomes sentient (LOL), most of us won't be around anyhow.

    I don’t think it needs the handicap of sentience, it seems to be doing fine without it.

  • @auxmux said:
    None of us is advocating replacing artists and designers. There are always those who use any technology for a less than altruistic purpose. It's something that has to be accepted and expected, but it doesn't make sense to deride something that will continue to evolve.

    If/when it becomes sentient (LOL), most of us won't be around anyhow.

    I hope to be retired and in the olde home enjoying it at that point.

  • @AudioGus said:

    @monz0id said:

    @AudioGus said:

    @monz0id said:

    @AudioGus said:

    I think a new form of creative that is also marketing / internet / social media savvy will emerge to cut out the cigar chompers. The 'Mr Beasts' of art and entertainment capable of making epic works that used to take massive corporate operations to execute on.

    I seem to remember a quote from Frank Zappa, where he praised the old cigar-chomping label bosses, as they were more inclined to take risks with new artists, whereas their younger replacements were always looking for the current safe, hip thing.

    God help us when PR companies take over that role. I guess they'll all be art directors soon too.

    AI music and art based on an algorithm.

    Yesterday I had a meeting at 10:30am. We brainstormed for an hour about a new game environment that didn't yet have a clear story, looking over previous images as general vague reference. After the meeting I then hit Stable Diffusion for about five hours and literally had what would have been six months' worth of work done, all with a new theme and visuals I have never seen before. No art director or exec was needed / involved, and when I presented the results at the end of the day it was all thumbs up from everyone. I think for now this empowers creatives with ideas to create tsunami nuclear bombs of persuasion. We are a ways off from an exec saying "Siri: give me a great idea / product", but I have no doubt that in some areas algorithmic creative will be a thing. My retirement is about 18 years away. No idea what even next year looks like. O_o

    I think that’s a positive use for it, using the results as a ‘mood/story board’, a sketchbook of ideas.

    It’s the ‘replacement’ for something, or someone, that bothers me, rather than an ‘extra tool’. The potential for a project manager to try it out for themselves, one evening at home, and the next day suggest to the directors they can save a few salaries by laying off the skilled artists, and getting an intern to generate some images instead.

    It does mean that some games will need far fewer concept artists.

    First they outsourced the blue collars, then the white. Good luck and best wishes.

  • edited August 2022

    @echoopera said:
    What’s the max resolution for Stable Diffusion renders @AudioGus is it still limited to 1024x1024?

    I am not sure. 960 x 512 is as large as I go (for 16:9), and then Topaz Gigapixel is great for uprezzing for presentations. Typically I 3D / overpaint everything that makes it through the filter, so I don't need tons of pixels.
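    For reference, Stable Diffusion render dimensions generally need to be multiples of 64, which is why something like 960 x 512 works out for roughly 16:9. A throwaway helper to illustrate the arithmetic (`snap_to_64` and `widescreen` are made-up names, not part of any of these tools):

```python
def snap_to_64(width: int, height: int) -> tuple:
    """Round dimensions down to the nearest multiple of 64 (the usual SD constraint)."""
    return (width // 64) * 64, (height // 64) * 64

def widescreen(target_width: int) -> tuple:
    """Approximate a 16:9 frame under the multiple-of-64 constraint."""
    return snap_to_64(target_width, round(target_width * 9 / 16))

print(widescreen(960))  # -> (960, 512): 540 rounds down to the nearest multiple of 64
```

    So a "16:9" render at 960 wide lands on 512 tall rather than a true 540, and the upscaler makes up the rest.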

    Fwiw, I’ve been using Pixelmator Photo to upres everything from MidJourney with great success.

    Yah Pixelmator I think is the best on iOS.
