well, let's just hope this nice discussion can serve to push a
little bit of granulation into the physical electronics world, and
that doepfer (or anyone else) will soon bring us some cv
granulation solution of whatever kind, be it deterministic,
heuristic, stochastic or whatever-ic ...
it is quite amazing that, for all the time granulation has existed,
no one has ever made a commercially available module or stomp box
of any sort using this great sound technique without computers.
if anyone knows about anything of the sort out there, please tell me!
by the way, having to patch an output from reaktor must be a joke, heh.
if i had to use a computer, then why would i need any external modular?
;)
best regards
gaspar
--- In Doepfer_a100@yahoogroups.com, Denis Gökdag <q-art@...> wrote:
>
> one reply to both of your posts :-)
>
> i can see your point about a "less access to individual grains, more
> complex results on a single (small) module" approach. generating a
> typical grain "cloud" would be a lot of modules and patching with my
> approach....but then, what i do if i want the "cloud" effect is just
> patch an output from reaktor or an audio track containing such a
> texture into the a100. i agree that it would be a lot more
> comfortable to have this sort of capability in a module,
> though....ideally there would be both. but if there could only be one
> of the two, i'd opt for the "discrete access" one, as this allows for
> stuff you don't get from an external source.
>
>
> On 07. Mar 2008, at 9:28 PM, gasp_uleg wrote:
>
> > -by "density" i mean "distance between grains", which could go
from
> > almost total overlap of the grains when a high frecuency clock is
> > used with long grains.
> k, that's the way i'd use the term too. see my reply to the "voice"
> issue further down.
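just to make the overlap arithmetic concrete, here is a quick python
sketch (the clock rates and grain lengths are made-up numbers, not
anything from this thread):

    # average number of simultaneously sounding grains ("density")
    # for a regular trigger clock: overlap = grain length * clock rate
    def average_overlap(grain_length_s, clock_hz):
        return grain_length_s * clock_hz

    print(average_overlap(0.050, 100.0))  # 50 ms grains at 100 Hz -> 5.0, dense overlap
    print(average_overlap(0.005, 100.0))  # 5 ms grains at 100 Hz -> 0.5, gaps between grains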
>
> >
> > -by saying "dynamics" i simply mean "changes of volume in time" of
> > each grain (that is the vca+envelope of each grain).
> >
> > i'm not very sure that having so much control of every single grain
> > would be that useful. i think of grains more as being controllable
> > all together, applying the polyphonic envelope to all of them at the
> > same time from inside the dsp but controlling the general attack and
> > release from cv.
>
> Well this is true for texture-type granular sounds, but if you want
> to go down the "sequence of extremely short grains" route,
> essentially building a highly complex, time-varying waveform rather
> than a dense soundscape, control of individual grain fade curves
> makes for a rather huge difference.
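for illustration, a little python sketch of what per-grain fade
curves do to one very short grain (the window shapes are just common
textbook choices, nothing denis specified):

    import numpy as np

    def grain(source, start, length, fade):
        """cut one grain from a source array and apply a fade curve."""
        g = source[start:start + length].copy()
        if fade == "linear":    # triangular fade in/out
            env = 1.0 - np.abs(np.linspace(-1.0, 1.0, length))
        elif fade == "hann":    # smooth raised-cosine fade
            env = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(length) / length)
        else:                   # "none": hard edges -> clicky, buzzy results
            env = np.ones(length)
        return g * env

    sr = 48000
    source = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)  # 1 s of 220 Hz sine
    # the same 2 ms grain with three different fades = three different timbres
    for fade in ("none", "linear", "hann"):
        print(fade, np.round(grain(source, 1000, 96, fade)[:4], 3))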
>
>
>
> >
> > i'm a bit confused about the "voice" term you use:
> >
> > i understand a "granular generator" as being a "granular processor"
> > intimately associated with a sample player or "source file" as you
> > call it.
> basically a "grain" is the same as one voice of a sampler playing
> back a specific section of a larger sample ("source file"). If i.e.
2
> grains overlap, you technically need to play back two voices, or
> "streams". In more complex, "meta-style" grain generators like the
> reaktor grain cloud this fact is simply hidden, as this technical
> aspect is not of any interest by itself in the context of
generating
> a "cloud", all you want to know there is how "dense" the result
will
> be (in whichever way the module achieves this). But when you
*build*
> such a module in hardware, it is, as every grain needs to be
treated
> as a separate "entity" in memory anyway (thats just the way the
> method works).....so as you already have the grains available
> seperately you might as well make them have their individual
outputs
> and individual parameters to better fit the pradigm of a modular
> analog synthesizer. I call this combination of parameter controls,
> one playback "stream" and a set of outputs a "voice". In my
proposal
> there would be four of these "voices", effectively making it
possible
> for four grains to overlap at any one time. This would make the
> "cloud" type application somewhat possible (though not very
dense),
> while making lots of really cool resonation, electrification,
> glitching, noisy, scraping pulsed digital stuff possible :-)
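a minimal python model of that "voice" idea, four fixed voices, each
with its own source buffer and its own output stream (all names and
numbers here are mine, purely illustrative):

    import numpy as np

    class GrainVoice:
        """one 'voice': a fixed source buffer, per-grain parameters,
        and its own output stream."""
        def __init__(self, source):
            self.source = source   # fixed source file for this voice
            self.pos = 0           # read position within the current grain
            self.length = 0        # 0 means the voice is idle

        def trigger(self, start, length):
            self.start, self.pos, self.length = start, 0, length

        def next_sample(self):
            if self.pos >= self.length:
                return 0.0         # idle voice outputs silence
            s = self.source[(self.start + self.pos) % len(self.source)]
            self.pos += 1
            return s

    # four voices, each bound to its own source file, so up to four
    # grains can overlap at any one time, one output per voice
    voices = [GrainVoice(np.random.randn(48000)) for _ in range(4)]
    voices[0].trigger(start=100, length=64)
    voices[2].trigger(start=5000, length=32)
    outputs = [[v.next_sample() for _ in range(64)] for v in voices]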
>
>
>
> >
> > being able to select different files for that "source file" doesn't
> > mean having several voices. it's always the same single voice but
> > with a different basic sound.
>
> Absolutely. Best example is CDP's "brassage (old)" for this kinda
> thing, or waveform software's "amber-x". Even reaktor's grain cloud
> can select a different source file per grain triggering event. BUT
> the reason i recommend having one fixed sample/source file for each
> voice is quite simple: this way i can keep track (and thus control)
> of which source file is being used by which grain, as a eurorack
> module will not have a display telling me which file is currently
> being referenced by which voice. This is of use on stage (or in any
> other "i gotta know my stuff" performance situation), it (ever so
> slightly) reduces DSP load (no switching between different address
> spaces in memory), and it makes constructing specific results more
> straightforward and manageable. And you can still reference a
> variety of source files by defining which voice plays at which time
> and possibly using more than one module for more than four source
> files simultaneously. But then that's just me, i don't mind buying
> multiple modules, i even went and bought two vocoder core systems
> for midi-automated stereo filterbanks ;-)
>
> In general it would be smart to keep a *realtime*, near-to-zero-
> latency granular processor as dumb (or "mechanical"/"close to
> hardware") as possible, because every additional layer of software
> makes timing more and more inaccurate (or at least it steadily
> becomes more challenging to code the algorithm in such a way that
> the order of things is right) on financially viable DSP setups. If
> you wanted to do it perfectly, you'd build it discretely with
> counter-ICs for memory pointers and only supply these with offsets
> and reset values generated in higher-level software, as these values
> are only generated once per grain. You'd simply clock the playback
> and D/As off an on-board or external VC-clock (like in a BBD),
> skipping the common system clock/sample rate and hence avoiding
> aliasing when doing FM (as you'd do this at the clock, not in the
> grain module's memory). BUT this approach would basically draw tons
> of current and require a lot of ICs and thus a rather complex board
> layout etc, making it rather unaffordable and large ;-)
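the counter-pointer idea in a few lines of python, purely as a mental
model (the class and its two-step load/tick split are my sketch, not
a schematic):

    class GrainPointer:
        """rough model of a counter-IC memory pointer: the higher-level
        side loads start/end once per grain, the counter then just
        increments on every clock pulse (which could come from a
        VC-clock, as in a BBD)."""
        def __init__(self):
            self.addr = self.end = 0

        def load(self, start, end):
            self.addr, self.end = start, end   # written once per grain

        def tick(self):
            if self.addr >= self.end:
                return None                    # grain finished -> silence
            addr, self.addr = self.addr, self.addr + 1
            return addr                        # address fed to sample memory

    p = GrainPointer()
    p.load(start=1000, end=1004)
    print([p.tick() for _ in range(6)])  # [1000, 1001, 1002, 1003, None, None]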
>
>
> >
> > the grains are "read" in realtime from the basic "source file" at
> > the point of a "moving cursor" (sort of). (i totally agree that an
> > external lag processor could be used if the "cursor" is
> > controllable via external cv)
> >
> > the audio signal "contained" in a grain is constantly changing
> > depending on where in the "source file" the grain is being taken
> > from and depending on how much feedback+delay is applied to the
> > grains.
> >
> > only when a "freeze" function is used is the grain's audio content
> > totally stabilized.
> correct, except that feedback+delay+"freeze" is typically not found
> in *generators* but in *processors*, because:
>
>
> >
> > it makes absolute sense to be able to refeed the grains with their
> > output signal with some possible delay so the timbre can be altered
> > in many ways. if pitch changes are applied to the grains or to the
> > main "source file", many great sounds can be obtained (from pitch
> > shifts to vaporous sounds, time stretching, reverbs or infinite
> > kinds of digital glitch effects)
>
> a *generator* takes grains from files, a *processor* takes grains
> from a writable memory, as in a delay module or pitch-shifter. While
> the memory content is static in the case of a generator (and you'd
> want it to be, to be able to take the "colour" of a *specific*
> source file and bring it into your new sound), the memory content of
> a *processor* is dynamic....the latter could be triggered to record
> from an input (including a delayed and fed-back version of the
> processor's output) and would reference *this* when generating
> grains, instead of a file. A "freeze" function simply sets this
> memory to be un-writable, so it effectively turns into the same
> static content you would have when using a file. You wouldn't be
> able to have repeatable and clearly determined results with a
> *processor*, and you wouldn't have the organic uncertainty with the
> *generator*....hence the differentiation between the two, as they
> are used for quite different applications. One is deterministic/
> constructivistic, one is heuristic.
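that distinction in miniature, as python (the class name and the
write-enable flag are my shorthand; "freeze" really is just that flag
being cleared):

    import numpy as np

    class GrainSource:
        """generator and processor share everything except who is
        allowed to write the memory."""
        def __init__(self, size=48000):
            self.mem = np.zeros(size)
            self.writable = False          # generator: memory stays static

        def load_file(self, data):         # generator-style: filled once from a file
            self.mem[:len(data)] = data

        def write(self, pos, sample):      # processor-style: record from an input
            if self.writable:              # "freeze" = clearing this flag
                self.mem[pos % len(self.mem)] = sample

        def read_grain(self, start, length):
            idx = (start + np.arange(length)) % len(self.mem)
            return self.mem[idx]

    src = GrainSource()
    src.writable = True                    # behaves as a *processor*
    src.write(0, 0.5)
    src.writable = False                   # "freeze": content is now stabilized
    src.write(1, 0.9)                      # ignored, just like a static file
    print(src.read_grain(0, 3))            # [0.5 0.  0. ]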
>
> I fully agree that delayed feedback is of great use in a *processor*.
>
> In my suggested module the *processor*-behaviour could simply be
> implemented with a memory-writer add-on module (except for the
> feedback, which you would have to patch manually).....all this
> add-on would do is provide the generator module with sound that the
> mem-writer records, which the generator would use instead of a
> loaded file. so you'd swap the static file for a writable memory. in
> this case creating a "delay" is simply creating an offset between
> the writing pointer's and the reading pointer's position (which is
> exactly how a delay is implemented). "Freezing" would simply be
> achieved by blocking the "record" control (with an inverter and an
> AND gate, to be specific ;-) )
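and that last bit in python: delay as a write/read pointer offset in
a ring buffer, freeze as record AND (NOT freeze), i.e. exactly the
inverter-plus-AND-gate logic (buffer size and variable names are
mine):

    import numpy as np

    SIZE = 48000
    mem = np.zeros(SIZE)
    write_pos = 0

    def tick(input_sample, delay_samples, record, freeze):
        """one sample of a ring-buffer delay with a freeze gate."""
        global write_pos
        record_enable = record and not freeze          # inverter + AND gate
        if record_enable:
            mem[write_pos] = input_sample
        read_pos = (write_pos - delay_samples) % SIZE  # delay = pointer offset
        out = mem[read_pos]
        write_pos = (write_pos + 1) % SIZE
        return out

    # a 3-sample delay: the impulse appears at the output 3 ticks later
    outs = [tick(x, delay_samples=3, record=True, freeze=False)
            for x in (1.0, 0.0, 0.0, 0.0, 0.0)]
    print(outs)  # [0.0, 0.0, 0.0, 1.0, 0.0]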
>
> l8a,
> d
>