Generating Black Metal with JavaScript

Table Of Contents

  1. Introduction
  2. The Problem Of Generative Music
  3. A Solution: The Web Browser
    1. The State Of Web Audio
    2. JavaScript
    3. The Power Of Open Source
    4. Tone.js
  4. Black Metal
    1. Aesthetics and Trve Kvlt
    2. The Music Behind The Kvlt
      1. Drums Are Not Important
      2. Riffs, Riffs, Riffs
  5. A Disappearance Into Anonymity
  6. Coding The Kvlt
  7. Conclusion

Footnotes

Introduction

In this paper I will document the practical part of my master’s thesis: generative Black Metal music made with JavaScript. My artistic work stands on three pillars, which will be discussed here: Generative Music, Web Audio and Black Metal.

When talking about Generative Music there are some questions that need to be answered. Some are conceptual: What does listening to Generative Music mean? What media can we use to make Generative Music? Others are more practical: Where and how can we create environments for different kinds of Generative Music? These questions have been - and still are - important to me as an artist, and I will argue that using the web, and more specifically JavaScript, is my way of answering them. I will also argue that the web, probably the most important technology of our time, has not yet been used to its full potential regarding Generative Music. Furthermore, I will give a glimpse into the ethical debate on why it is important for artists to use free software. I will then explain why I frame my work within the Black Metal genre, and I will break down the quintessential elements upon which I built my Generative Music system. I will discuss why the generative approach is a logical continuation of a genre that has spent the last two decades dehumanizing its protagonists. Lastly, I will give an insight into my code and show how I transform these elements to create my individual approach to Black Metal music.

I will provide background information to these topics and show how I reflect them in the process of creating this work.

The Problem Of Generative Music

Since the beginning of my studies I have been fascinated by Algorithmic Composition and Generative Music. The possibility of taking my personal preferences out of the composition process, creating algorithms to execute abstract ideas and thus making music that no human being would write seemed incredibly intriguing to me - and I believe it is one of the driving forces behind this kind of music. But it’s not all sunshine and rainbows in the world of Generative Music. I soon realized there was a big problem with this genre.

From a historical point of view, it was not possible to reproduce Generative Music adequately until recently. There used to be three options: concerts, records or installations. None of these possibilities can capture the essence of Generative Music. When Hiller and Isaacson created the Illiac Suite - the first piece generated by an electronic computer - there was of course no way for listeners to execute the system themselves, since only a few big universities had computers, and those were very limited in performance. Even if you had had access to a computer, you would still have needed an orchestra to perform the piece. Listening to the piece was only possible by attending a concert or buying a record. This is, of course, contrary to the essence of Generative Music. It’s written right there in the name: “generative”. Conceptually speaking, I would claim that listening to a generative piece does not mean reproducing one instantiation of it, but rather executing the generative system itself and listening to possibilities being instantiated by the system in real time. This means we need media that can create and reproduce a piece while we listen.

As computers became more affordable and increasingly powerful, it became possible to generate music in real time, and Generative Music found its way into exhibitions. In this setting, I would argue, one is actually listening to Generative Music as it should be understood conceptually. But there’s still a problem: the exhibition setting prevents the listener from repeatedly hearing the piece, just as the concert and the record do. On top of that, an exhibition might not be the place where you want to listen to certain types of music. This always bothered me, both when going to exhibitions and when exhibiting my own work. When listening to other people’s pieces I usually found myself in a room with people walking in and out, bad acoustics and no time to get involved with the piece and build a relationship with it, as I would with my favorite records that I listen to over and over again. When I presented my own pieces I couldn’t help but imagine people feeling the same way, and since my artistic work is heavily influenced by Black Metal, a kind of music predestined to be listened to in solitude, this kind of exhibition setting was always a problem for my work and never a solution. But more on that later.

My perspective on Generative Music is a technological one rather than a historical one. My interest in technology dates further back into my youth than my interest in music. At some point it became clear to me that my goal as an artist is to reflect in my art the way contemporary technology is used, or not used. If I want to use digital technology to create music in a way none of its alternatives can, Generative Music is a fruitful endeavor. I also became more interested in the open-source and free software movement as well as digital sustainability. When I see constructions like the Klangdom at ZKM, I ask myself whether it is really justified to use so many resources to make music for a few privileged people. This all led me to the conclusion that I have to make Generative Music available on the web: art that everyone can listen to with technology they already own - a smartphone.

A Solution: The Web Browser

“For the first time ever, the devices people use to listen to music are also capable of executing Generative Music systems.”1 As Alex Bainter - one of the few artists currently making generative ambient music on the web - puts it, we have reached a turning point in how we can build Generative Music systems. The web influences our lives significantly in many areas, but Generative Music has not yet taken its place within it. Let’s take a look at the technology that makes this possible:

The State Of Web Audio

Boris Smus gives a brief overview of the history of Web Audio in his book “Web Audio API: Advanced Sound for Games and Interactive Apps”: In the beginning there was the <bgsound> HTML tag, which allowed audio to be played back upon visiting a website. This feature was limited to Internet Explorer; it was imitated by Netscape’s <embed> element but never standardized. After that came Flash, which made cross-browser audio playback possible, with the drawback of requiring third-party software (a plug-in) to run. And let’s be honest, I have no fond memories of Flash plug-ins, and I think a lot of people feel the same way.2 More recently HTML5 introduced the <audio> tag. It was better than Flash, but not good enough, at least for more complex applications. Boris lists a few reasons why:

  • No precise timing controls
  • Very low limit for the number of sounds played at once
  • No way to reliably pre-buffer a sound
  • No ability to apply real-time effects
  • No way to analyze sounds3

So essentially the browser was about as capable as a Nirvana cover band made up of 12-year-old kids. Jokes aside: these features are indispensable for artists working with sound.

In 2011 the Web Audio API was released and it became possible to do a whole lot more in the browser: “The goal of this API is to include capabilities found in modern game engines and some of the mixing, processing, and filtering tasks that are found in modern desktop audio production applications. The result is a versatile API that can be used in a variety of audio-related tasks, from games, to interactive applications, to very advanced music synthesis applications and visualizations”, says Boris.3

If we look at the first W3C (World Wide Web Consortium) Editor’s Draft of the Web Audio API, we get a feeling for what we’re dealing with here. At the top of this document is a section about its status: “This specification defines a proposal for an audio processing and synthesis API for use in client-side user agents (e.g. a browser). This document is accompanied by an alternative proposal, the MediaStream Processing API, and an umbrella document outlining the relationship between these proposals. This document was published by the Audio Working Group as a First Public Working Draft. This document is intended to become a W3C Recommendation.” 4 There are two things that strike me as important here. The first one is that we are talking about an “API for use in client-side user agents”. This means that whatever we develop with this API will be executed client-side, meaning in the browser on the device that is viewing the web page. This is interesting for artists because it allows an individual musical environment for every listener. There has, of course, long been the possibility to generate music on a server and stream it to the clients, resulting in all clients listening to the same music at the same time. In my journey to generate Black Metal this was something that I specifically didn’t want to do - I prefer the individual approach - but again I am getting ahead of myself…

The second interesting sentence in this short paragraph is the following: “This document is intended to become a W3C Recommendation.” This means that W3C is proposing that this API should become standardized and thus available cross-browser.5 Luckily for anyone interested in Web Audio this actually happened and the Web Audio API is supported in all major browsers.6

If we look at the features implemented in this proposal we see the following list:

  • Modular routing for simple or complex mixing/effect architectures, including multiple sends and submixes.
  • Sample-accurate scheduled sound playback with low latency for musical applications requiring a very high degree of rhythmic precision such as drum machines and sequencers. This also includes the possibility of dynamic creation of effects.
  • Automation of audio parameters for envelopes, fade-ins / fade-outs, granular effects, filter sweeps, LFOs etc.
  • Processing of audio sources from an <audio> or <video> media element.
  • Audio stream synthesis and processing directly in JavaScript.
  • Spatialized audio supporting a wide range of 3D games and immersive environments:
    • Panning models: equal-power, HRTF, sound-field, pass-through
    • Distance Attenuation
    • Sound Cones
    • Obstruction / Occlusion
    • Doppler Shift
    • Source / Listener based
  • A convolution engine for a wide range of linear effects, especially very high-quality room effects.
  • Dynamics compression for overall control and sweetening of the mix
  • Efficient real-time time-domain and frequency analysis / music visualizer support
  • Efficient biquad filters for lowpass, highpass, and other common filters.
  • A waveshaping effect for distortion and other non-linear effects.4

If we take a look at the latest draft of the Web Audio API, from 2018, we can see that the following features have been added over time:

  • Flexible handling of channels in an audio stream, allowing them to be split and merged.
  • Processing live audio input using a MediaStream from getUserMedia().
  • Integration with WebRTC
    • Processing audio received from a remote peer using a MediaStreamTrackAudioSourceNode and webrtc.
    • Sending a generated or processed audio stream to a remote peer using a MediaStreamAudioDestinationNode and webrtc.
  • Oscillators (added 2013)7 8

So not only does the Web Audio API offer most of the advanced audio processing capabilities we find in traditional music software, its very core has also been there since 2011, with oscillators added in 2013. The API also takes a modular approach that is very similar to the concepts we know from modular synthesizers, Max/MSP or Pure Data.
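
To illustrate this modular approach, here is a minimal routing example with the plain Web Audio API - an oscillator patched through a gain node into the speakers, much like cables on a modular synthesizer (note that modern browsers only start audio after a user gesture):

const context = new AudioContext();
const oscillator = context.createOscillator();
const gain = context.createGain();

oscillator.type = 'sawtooth';
oscillator.frequency.value = 110; // A2
gain.gain.value = 0.2;            // keep the volume down

// modular routing: oscillator -> gain -> speakers
oscillator.connect(gain);
gain.connect(context.destination);
oscillator.start();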

So with all that given, how is it possible that we see so little done with this technology in the field of Generative Music?

I believe there are two reasons. The first one is based on the client-side nature of the API. Although client-side applications are a great thing, they depend on the client’s hardware, so if, for example, a smartphone doesn’t have the processing power to actually run the audio properly, a piece can be ruined. While trying out different demos this has only happened to me with websites that also incorporated 3D graphics and spatialized audio, so I feel this isn’t really a problem anymore today, at least not with audio-only applications. So if this problem is solved, what is the reason? To explain this we have to take a detour and talk about JavaScript and open-source software…

JavaScript

JavaScript, which - according to the legend - was created by Brendan Eich in only 10 days and released in 1995, has become probably the most popular programming language ever.9 In Stack Overflow’s extensive yearly developer survey, JavaScript has topped the category “Most Popular Programming, Scripting, and Markup Languages” for seven years in a row.10 After Netscape implemented Java plug-ins into their browser, they decided to come up with a scripting language for small client-side tasks that would make web pages more dynamic, complementing Java, which was intended for programming complex, enterprise-sized web components. But JavaScript had to come a long way: because of the short period Brendan Eich had to create the language in, many things had to be fixed and upgraded later on. There were also several parties that wanted to take the language in different directions, which led to different versions and, of course, to problems with standardization (in other words, the dark ages of JavaScript). But in 2009, with the release of the ECMAScript 5 standard (ECMAScript is the name of the standard which JavaScript follows), the community found its way back together and the language started to flourish.

This is partly owed to JavaScript being the only scripting language for the web for a long time: with the popularity of the internet grew the popularity of JavaScript. The other reason JavaScript has grown so much is its community. What started out as a simple scripting language to make websites a bit more dynamic exploded into a programming language that can be used for almost anything. With Node.js it became possible to use JavaScript even as a server-side programming language, frameworks like React (backed by Facebook) or Angular (backed by Google) allow for developing complex web and native apps, Gpu.js allows JavaScript code to run on graphics cards, Espruino runs JavaScript on microcontrollers, Three.js and P5.js (backed by Processing) can be used to program 2D & 3D visuals, and Tone.js provides tools for making interactive music. These are just a few examples, of course, but you can see where I am going with this. What appeals to a lot of people is that JavaScript can be used for everything: if you are programming a web application you can use it for the UI, the database, the server side and the logic. The same applies to artists: I can use the same language to program audio, user interface, visuals and more. But it’s not just that: all the tools I mentioned are open source.11

The Power Of Open Source

First of all, there is a difference between open-source software and free software. The terms describe mostly the same thing, but from different perspectives. “To use free software is to make a political and ethical choice asserting the right to learn, and share what we learn with others. Free software has become the foundation of a learning society where we share our knowledge in a way that others can build upon and enjoy.”12 While for the free software movement the use of free software (they refer to free as in “free speech”, not as in “free beer”) is a political and ethical action, the open-source approach looks at this from a practical point of view and does not condemn the use of non-free software. I myself am more of a free-software kind of guy, so I will go down that route in this chapter. “But the chapter is called The Power Of Open Source!” you say? That’s true - I used this term to lure you in and talk about free software, and there’s nothing you can do about it. Note also that almost all open-source software is also free software.

A program must meet four criteria to be considered free software:

  • The freedom to run the program as you wish, for any purpose (freedom 0).
  • The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
  • The freedom to redistribute copies so you can help others (freedom 2).
  • The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.

“We campaign for these freedoms because everyone deserves them. With these freedoms, the users (both individually and collectively) control the program and what it does for them. When users don’t control the program, we call it a “nonfree” or “proprietary” program. The nonfree program controls the users, and the developer controls the program; this makes the program an instrument of unjust power.”13

So where am I going with this? Software has become a very important part of our lives, and this raises a number of ethical questions. I will not go too deep into this, but if you have ever wondered what Google does with all your data and whether or not that is right, you know what I’m talking about. Free software prevents this by giving the community the possibility to look at what the software actually does, make adjustments to prevent unwanted behavior and distribute a better version of said software. Pretty neat, huh?

What does this mean for art? I would argue that if we want our art to be free, we should use free tools. Let me give an example and look at the proprietary software Max/MSP and the open-source, community-driven Pure Data. I appreciate Max/MSP, because it was my way into audio programming, but let’s leave emotions out of this. How is it possible that Pure Data runs on the web, on a Raspberry Pi, on Android phones and in game engines like Unity, while Max/MSP basically needs a powerful MacBook in order to function? Since Pure Data is open source, one can, for example, write a compiler to translate Pure Data patches into Web Audio API functions (WebPd). From my years on the Cycling ’74 forums I know there once was a time when you could quite easily turn Max/MSP patches into VST plug-ins, but this feature was removed. If I can turn my patch into a VST and send it to my friend, why would he need a Max/MSP license? “But now they have Max for Live”, you say. This is going to lock me into the proprietary Ableton ecosystem. And all of that is again going to lock me into the Windows/Apple ecosystem, and it ends up with you crying over your $3000 MacBook Pro because Apple soldered that thing together to the last screw and you can throw it out after two years. This is kind of a rant, and I don’t want to accuse Cycling ’74 of being a bad company; I just tried to show the power dynamics that are involved here. A software like Pure Data also gives you the opportunity to study every bit of code it uses and learn how DSP and musical principles can be implemented, empowering you to contribute to the software yourself.

This only gets worse as software as a service becomes the new standard. Who owns my Pro Tools project when I stop paying for the monthly license? It can’t be me, because I cannot open it anymore until I start paying again. The only thing I still own is a render of the piece I made using that software, and to me this seems highly problematic.

And to be clear, not all JavaScript is free, but we do get a choice there: the option to use these free tools and to make our art free in return. Again, we are talking about freedom, not about price.

Tone.js

If you have read this text carefully, you will notice that I have not yet answered the question at the end of my chapter on the Web Audio API: why do we see so little Generative Music in the browser? Well, the Web Audio API isn’t exactly easy to understand, and a number of frameworks arose to simplify certain aspects of it, including Tone.js - the framework I will be using for my piece. The first commit to this project dates back to 2014.14 But it takes time for a framework to grow, to become stable and powerful, and it takes time for artists to pick up on these tools and learn how to use them. Tone.js specifically provides functions and classes for making interactive music with the Web Audio API, and this makes it very accessible to sound artists. It implements relative timings, a transport, loops, patterns, synthesizers, samplers and more tools we already know and use in our work. What needs to grow now is the community. If I run into a problem with Max/MSP, chances are someone has already solved it and written about it in the forum. With frameworks like Tone.js these communities have yet to grow, and that is also a reason why I made my source code available on Github for other artists interested in this technology. I have tried to incorporate JavaScript into my work for roughly a year now, and I understand the struggle of not coming from a programming background.
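
To give a first impression of how accessible this is, here is a minimal Tone.js sketch (the API names follow recent Tone.js versions; older releases use toMaster() instead of toDestination()):

import * as Tone from "tone";

// a synthesizer routed to the speakers
const synth = new Tone.Synth().toDestination();

// a loop that fires every quarter note ("4n"), using Tone's
// relative time notation instead of raw seconds
const loop = new Tone.Loop((time) => {
    synth.triggerAttackRelease("E2", "8n", time);
}, "4n").start(0);

// nothing sounds until the transport is started
Tone.Transport.start();

Now that we have covered the foundation of my artistic approach to Generative Music and technology, it’s time to get serious and talk about Black Metal.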

Black Metal

Black Metal originated in Norway during the late 1980s. From the very beginning, the main concepts behind this genre were misanthropy, anti-mainstream and anti-commerce. One of the main reasons that led to the creation of this genre was dissatisfaction with what Death Metal had become: musically too technical, aesthetically too over-produced and societally too popular. Black Metal in its beginnings was about artistic integrity and going against the mainstream, creating sound that was anything but pleasant to the listener. The genre also relied heavily on satanic imagery and lyrics. While most of the core protagonists state that they were not practicing Satanism as a religion, Christianity was so deeply ingrained in Norwegian society that rebelling against it would eventually mean rebelling against the whole society and its values. Fenriz from Darkthrone states that while they were not Satanists, they also weren’t atheists, and that he believes this music would not have been created without some belief in it. The people involved in the creation of this music were very extreme in their approach to music, but it did not stop at making music. Since being real - referred to in the Black Metal scene as being “trve kvlt” - was very important, a lot of bands took this realism one step further. Some musicians would go on to harm themselves on stage, while others would burn churches all around Norway.

Black Metal holds a special place in my development as an artist. It was my first contact with music that I would call conceptual: on the one hand in its aesthetics and production, as an antithesis to mainstream entertainment music, on the other hand as a genre with an - albeit unspoken - set of compositional rules, whose understanding acts as a gatekeeper against the potential listener. The search for authenticity, which is central to the genre, and the associated idea of wanting to do everything yourself, have not only led me to take composing seriously, but have also led me to learn more instruments and audio engineering. The more I dived into the musical and sonic extremes of this genre, the more difficult it became for me to find a comparable musical experience. I have been composing and performing Black Metal for a decade now and all my artistic work is influenced by it. I see this work as a tribute to and a reflection of this genre.

Aesthetics and Trve Kvlt

Because Black Metal originated as a counter-movement to Death Metal and was all about anti-mainstream, anti-commerce and misanthropy, the music was - and to some still is today - very exclusive and elitist. This formed the concept of “trve kvlt”. While this term was heavily overused and became kind of a meme, I still believe the sentiment behind it is interesting. The term describes a kind of artistic integrity that needs to be upheld to be considered true Black Metal, which led to the importance of a DIY attitude within the genre. A great example of this is Darkthrone, one of the most influential early Black Metal bands. They started out as a Death Metal band and signed a record deal with the well-known British label Peaceville Records. Their first record was a well-received, quite technical Death Metal album with what could then be considered modern production. Since they didn’t have enough money, they had to compromise on their sound, for example by using electronic drums - a compromise they would never make again. They recorded their second album A Blaze In The Northern Sky in the studio and tried to get their sound as raw as possible, inspired by bands from the early 80s. Let’s listen to the difference between their first album Soulside Journey and their second album A Blaze In The Northern Sky:

Nor The Silent Whispers - Soulside Journey - Darkthrone
1991
Kathaarian Life Code - A Blaze In The Northern Sky - Darkthrone
1992

Because of this production style the label initially refused to release the album and asked the band to remix everything, but when the band threatened to leave the label, they released it anyway, hoping that nobody would care about this album. During the production of their next records the band started doing everything themselves with a four-track recorder, perfecting their sound by the time they released Transilvanian Hunger, one of the most important albums in the genre.

Transilvanian Hunger - Transilvanian Hunger - Darkthrone
1994

Later, a lot of bands tried to outdo each other in bad production quality, but in my opinion they completely missed the point. There is a tradition of Black Metal bands that act only as studio projects. Darkthrone was one of the first to go down this path, refusing to play live shows. The cliché of the rock star performing in front of millions of fans who idolize him to god-like proportions is not something that was very appealing to the misanthropic protagonists of the genre. There is also an argument to be made for Black Metal not being live music: Black Metal deals a lot with loneliness and individualism, not exactly something that can be expressed in a live show, and the production quality plays an important role in this matter. Because of the implied authenticity of these records, a physical quality of the music remains. The important thing to achieve is to make it perceptible to the listener that the recording was not heavily edited, but performed as heard. If you listen to the intro of Mother North by Satyricon, for example, you can hear how the drums have these loudness bursts and how they lag behind in tempo as the drummer tries to keep up with the very high tempo.

Mother North - Nemesis Divina - Satyricon
1996

This conveys the physical energy that is put into this music so that it can be experienced by the listener. In other words, if I hear a five-minute blast beat I know that the drummer is sweating behind his drum set trying to keep up. Let’s hear Darkthrone drummer Fenriz describe it in his own words:

In Black Metal there is a lot of hate against “modern” technology, for example triggered drum sets, and looked at from this point of view I think it makes sense. Listen to the drums in “The Sacrilegious Scorn” by Dimmu Borgir - a band often accused of making Black Metal mainstream: in my opinion the whole record is so polished that there is no energy to it; it lacks the physical experience of listening to this music.

The Sacrilegious Scorn - In Sorte Diaboli - Dimmu Borgir
2007

This is one of the things that separates the wheat from the chaff in Black Metal, because it shows whether or not a band actually understands the concepts behind their genre. I would argue that this concept also diversifies the genre, because it forces bands to create more than just a series of notes. They create everything from start to finish - music, production, often even the artwork - and if a band actually has a vision, it allows them to pursue it without compromise. It also inspires young bands, because they can achieve this without having a big label behind them. The cover art of “A Blaze In The Northern Sky” came to define much of the aesthetics of Black Metal precisely because the band did it themselves, with the tools that they had. They didn’t want to hire an artist to paint an artwork like all the Death Metal bands did; they just took a picture of one of the band members and made it black and white, as simple as that.

A Blaze In The Northern Sky

In my opinion, the thing Black Metal has yet to embrace is that modern technology isn’t bad: it’s about how it’s used. That’s what I aim to accomplish with my piece: use modern technology in a way that actually adds value to the Black Metal genre. It was still important to me to work with the tools I have. I recorded my own musical material, developed and implemented my ideas, concepts and code on my own - with the help of my mentor Cedric Spindler - and I tried to include as few steps as possible in the mixing process. I’d rather work with the distortion implemented in the Web Audio API than go to great lengths to make the guitar sound like on every other album.

The Music Behind The Kvlt

In the last chapter I covered the aesthetic aspects of Black Metal that are important to me; in the following I will focus on the music itself. I will approach this the same way I did in my practical work, by looking at drums and guitars individually. But there are a few general things to say about Black Metal music. As I described in the last chapter, Black Metal was a counter-movement to Death Metal: Death Metal was becoming more technical and progressive, so Black Metal took the opposite direction. Monotony and simplicity are key to the genre and to creating the atmosphere that Black Metal is known for. In the following sections I will describe how I approached the music from a programmer’s point of view.

Drums Are Not Important

“In Black Metal drumming is not important.” Of course that is quite a bold and - to many Black Metal drummers - provocative statement, but the sentiment behind it is very interesting. In my opinion, drums being unimportant is a very important part of Black Metal, if I dare add to the confusion. So what does that mean? Black Metal is largely defined by a few very specific drum patterns. Let’s look at a few examples:

We already heard Darkthrone’s Transilvanian Hunger in the last chapter; it features the iconic blast beat that is heavily associated with Black Metal.

blast beat

This beat is used heavily by virtually all Black Metal drummers and covers about 90% of the drum parts on the whole Transilvanian Hunger record. It can either be played this way or starting with the snare and hi-hat followed by the bass drum.

On most slower Black Metal tracks you would find a beat like this:

slow beat

You can hear this, for example, in Burzum’s Dunkelheit, which features this exact drum beat for almost seven minutes straight:

Dunkelheit - Filosofem - Burzum
1996

So now, what if I told you that’s all there is to Black Metal drumming? Consequently, that’s the way I went about programming the drum parts. Of course there is more to Black Metal drumming, but to a great extent it can be described through these two patterns. Let’s look at another classic Black Metal track, Freezing Moon by Mayhem:

Freezing Moon - De Mysteriis Dom Sathanas - Mayhem
1994

And now let’s look at the drum part:

freezing moon

This pattern follows the same principle as the one from Burzum. It can be boiled down to a kick and a cymbal on the first beat, then a cymbal on the second, and finally a snare and a cymbal on the third. So one could build these beats from a kick and a cymbal, followed by x cymbals, followed by a snare and a cymbal, followed by y cymbals - see the sketch after this paragraph. Everything in between, like small bass drum patterns or additional cymbals, can be viewed as permutations added on top of this simple formula. Normally x and y are of equal length, but in odd time signatures x usually takes more space than y.
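
Expressed in code, this formula could look like the following sketch (illustrative, using plain instrument names; my actual system works on the numeric arrays shown in the chapter Coding The Kvlt):

// [kick + cymbal], x cymbals, [snare + cymbal], y cymbals
function basic_beat(x, y) {
    const beat = [["kick", "cymbal"]];
    for (let i = 0; i < x; i++) beat.push(["cymbal"]);
    beat.push(["snare", "cymbal"]);
    for (let i = 0; i < y; i++) beat.push(["cymbal"]);
    return beat;
}

// basic_beat(1, 1) yields the beat described above:
// [["kick","cymbal"], ["cymbal"], ["snare","cymbal"], ["cymbal"]]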

You can observe this even with more experimental drummers like Darkside from Mgła:

The cymbal patterns are not so much about complex rhythms as about creating an almost melodic layer, while the very simple back beat that was established throughout the piece is never abandoned. This is what Fenriz means when he says “the drums are just supposed to be there”. Drums in Black Metal are not there to introduce complex rhythms, groove or swing. This is also why drummers often stick to a certain pattern for the whole song or large parts of it: monotony is key in Black Metal and is often maintained through these rhythmic patterns. If you think back to the video where Fenriz talks about drum sound, he says that while drummers became more technical in Death Metal, “no one wanted to take one for the team”, so they sacrificed the kick drum sound in order to make every element perceptible. Black Metal is exactly the opposite: it’s not about showing off who is the best drummer, who can play the fastest guitar solos, who can cover the most octaves with his voice. If done well, all elements in a Black Metal track are perceived as equal, at least in my opinion. That’s why it is important to stick to these patterns and find ways to build on them, rather than to mix different patterns and rhythms. Simplicity and repetitiveness are such important aspects of the Black Metal ideology that by abandoning this style of drumming in favor of a more progressive one, the music is no longer perceived as Black Metal. I would argue that this holds true in some forms of electronic club music or even minimal music as well. To close the circle, this explains why Fenriz - from a drummer’s point of view - says: “In Black Metal drumming is not important.”

Riffs, Riffs, Riffs

When it comes to Black Metal riffing, two people can be named as highly influential: Euronymous from Mayhem and Blackthorn from Thorns. The Live In Leipzig album by Mayhem, which was officially released in 1993 but traded in the scene since 1990, and the Grymyrk demo tape by Thorns, featuring only guitar and bass, were the two records that defined the whole genre. I will first talk about the style they introduced and later about harmony.

Barre Chords

One of the most important styles in Black Metal is the use of barre chords. I would argue that in other forms of Metal the prevalent concept is that of lead and rhythm guitar: the rhythm guitar plays power chords and the lead guitar plays harmonies and melodies on top of that. This allows for a fairly organized sound even when using distortion. Black Metal, I think, is more about blending those two elements into one, and barre chords are perfectly suited for that. Guitar players often arpeggiate over these chords while letting the strings ring out, creating a texture of overlapping notes. This technique is used extensively with minor chords over chromatically changing roots. Probably the first to use this technique, in a very primitive form, were Mayhem with their song Freezing Moon:

Freezing Moon - Live In Leipzig - Mayhem
1993

But it was Thorns who further developed this style and used it heavily on their Grymyrk demo-tape:

Home - Grymyrk - Thorns
1991

This style is also a key element of the sound of Burzum, which popularized it further:

Jesus' Tod - Filosofem - Burzum
1996

I believe that this style of playing is a very important contribution to the atmosphere that Black Metal is known for.

Tremolo Picking

Tremolo picking is to Black Metal guitar playing what blast beats are to its drumming, and the two almost always occur together. As stated before, in Black Metal all elements are often perceived as equal, and this is often achieved by applying the same principle to all elements to form the wall of sound. You can hear this style of playing on almost any Black Metal album, and it is probably the technique most often associated with Black Metal guitar playing. While Mayhem used short parts of tremolo picking as early as their 1987 Deathcrush album, it was not until Live In Leipzig that it became an important part of their compositions:

Buried By Time And Dust - Live In Leipzig - Mayhem
1993

And Thorns again would combine this with their minor-chord-heavy style and develop it further:

Home - Grymyrk - Thorns
1991

Dyads

Another playing style that emerged from the Black Metal genre is the playing of dyads. I see this as an evolution of the power chord: it was fairly common to play two notes together, and Black Metal expanded on that concept. One of the first appearances of these kinds of chords is probably in this part of Freezing Moon by Mayhem, where in one instance the power chord is abandoned in favor of a minor third dyad:

Freezing Moon - Live In Leipzig - Mayhem
1993

freezing moon

But it was Darkthrone who greatly expanded this concept, creating what is sometimes referred to as “finger moving riffs”. The most famous appearance of this style is the intro riff to Transilvanian Hunger:

Transilvanian Hunger - Transilvanian Hunger - Darkthrone
1994

transilvanian hunger

In this style a common note is often kept between alternating dyads while the other finger moves, which is probably where the name comes from.

Harmony

While these styles of playing were rather straightforward to program, harmony was a bit tricky at first. As a music student, my first approach was naturally to rely on the music theory I had learned during my studies: start from scales, then build chords from those scales. But going over some tracks and trying to apply this concept to them, I quickly realized that it wouldn’t work. Black Metal is very much based on the rejection, if not negation, of mainstream music, and this can be found on a compositional level. While Black Metal follows some concepts of scales, diatonicism and numerals, it breaks with them at seemingly random points. While I was going over different songs, conditions to implement in my code kept piling up in my head with no end in sight. This isn’t what Black Metal stands for; Black Metal is simple. There had to be a simple concept. So I tried thinking about how these songs were probably written, and suddenly it came to me: if you want to think in Black Metal, you have to think in shapes. There are certain chord shapes on a guitar that work with Black Metal and certain shapes that don’t. I challenge you to find a major triad in a classic Black Metal song. I didn’t. What I found were minor chords where major chords would be expected, or, on other occasions, sus2 chords. So I developed a new concept in which a number of chord shapes are described relative to a root note, which is chosen randomly. This gives me the opportunity to change the harmonization of root notes as seen in the finger-style riffing, to generate chromatic progressions of minor chords, to build scales by combining different shapes, and to combine all of these methods with each other. I think this gives me a lot of freedom to generate different riffs and allows me to create new combinations from the smallest elements of Black Metal composition.

A Disappearance Into Anonymity

In this chapter I want to talk about more recent developments in the Black Metal genre and where I see my work fit in. In a previous chapter we looked at the aesthetics of Black Metal, but we haven’t talked about one important thing: self-staging. One thing that made Black Metal stand out from other forms of extreme Metal was how the artists presented themselves. Introduced to the genre by Mayhem vocalist Dead, most artists wore “corpsepaint”: black and white face paint that Dead used to make himself look like a corpse. This was adopted by the whole scene and became an art form in itself. It was used as a ritual by the musicians to get into a certain state of mind. “When we (…) are using corpsepaint, we are usually in a state of mind that makes us feel like we are getting nearer darkness”, said Faust from Emperor once in an interview.15 And it seems that there was something to that. When Black Metal became the focus of international mainstream media after a series of suicides, murders and church burnings, corpsepaint became the face of Black Metal. The fictional characters these musicians created for themselves entered the real world as a hostile force of destruction. And as interest in the genre grew, so did the mythology around these characters.

But as Black Metal became more popular, bands got signed by mainstream labels, production quality improved, and the once hostile symbol of the corpsepaint turned into a caricature of itself. New bands started to open the music up to industrial, hard rock and film music influences and went on stage in fancy costumes. Black Metal went from performance art to theater. Corpsepaint was no longer a ritual, it was an accessory. Black Metal was no longer dangerous, it was in the charts. Some bands, like Dark Funeral or Marduk, tried to uphold the image the genre once had by being more and more extreme in their glorification of Satanism and violence, which worked for a few shocking headlines but very quickly became embarrassing. It was not until the French band Deathspell Omega came along that Black Metal regained its seriousness. They deconstructed the personality cult so many other bands tried so desperately to achieve, by disappearing into anonymity. The change was soon to be found in their music too: they deconstructed traditional Black Metal in favor of more complex and dissonant music, incorporating elements of jazz and new music. Presenting themselves in interviews as reflective and intellectual, they managed to restore seriousness to their music, which they saw as a new level of metaphysical struggle.16 This was picked up by the Polish band Mgła and by some bands within the Icelandic Black Metal scene. While they combined the progressive style of Deathspell Omega with more traditional Black Metal, they also brought anonymity to the stage, hiding their faces with black cloths and barely moving during performances. This became a trademark of the avant-garde in the Black Metal genre. It is an evolution of the dehumanization that was also part of the corpsepaint, taken one step further.

That’s where I think Generative Music allows us to go even further. While Deathspell Omega tried their best to stay anonymous, functioning only as a studio project, there are still people behind it playing and composing this music. Generative Music manages to eliminate the human part to the highest possible extent, leaving only the programmer. The music becomes autonomous and there are no longer protagonists that need to be dehumanized. I see my work as an evolution of what the Black Metal avant-garde tried to accomplish. I also believe that this dehumanization leads Black Metal out of its very controversial past, replacing the human emotion that has led to crimes and right-wing propaganda with a computer.15

Disclaimer: Black Metal is a very controversial music genre in many different ways. What is most concerning are its ties to extreme right-wing ideology, which was mostly spread within the scene by Varg Vikernes from Burzum in the early 90s. Since the Black Metal underground consisted of people with very extreme ideas and mindsets, they accepted this right-wing ideology into their genre - some adopted it, others tolerated it. This inspired other right-wing extremists to enter the scene, and there have since been multiple cases of bands using Nazi imagery and vocabulary. I do not condone any of these actions and political views. While most bands remain apolitical and completely avoid the conversation, I have noticed a small trend of bands starting to speak out against these politics, and I would like to take this opportunity to point out a few: Wolves In The Throne Room, Gråt Strigoi and Feminazgul.

Coding The Kvlt

In this last chapter I would like to give some insight into my code. I will focus on the most important concepts and simplify the code as much as possible. The full source code is available on Github for anyone interested in going deeper.

In my opinion, the most important element of a composition - whether it is generative or not - is the structure. That’s why I was looking for a method to connect the individual parts of my piece in a musically meaningful way. One of the main concepts in almost any form of music - and in Black Metal - is variation and development. In my piece I accomplish these two things by using a recursive tree system. Let’s take a look at the code:

class Pattern {

    constructor(pattern) {
        // the PolyphoneSequence this node of the tree is built on
        this.base_pattern = pattern;
    }

    permute() {
        // create two independently randomized variations of this pattern...
        let randomized_1 = this.base_pattern.randomize();
        let randomized_2 = this.base_pattern.randomize();
        // ...and append them to this node as two new child patterns
        this.pattern_1 = new Pattern(randomized_1);
        this.pattern_2 = new Pattern(randomized_2);
    }

}

The Pattern class is given an input pattern, which is created when the composition is started. This input pattern consists of arrays which hold the drum, guitar and bass note numbers notated in rhythmic patterns. When the function permute() is called, the Pattern class takes this input pattern and calls the randomize() function, which applies a set of permutations to the pattern that we will later look at. Then the permuted patterns are handed over as input patterns to two new instances of the Pattern class, which are appended to the object. This creates a structure that looks something like this:

                                                              ---[variation]--->permute()---etc...
                                                             /
                               ---[variation]--->permute()---
                              /                              \
[base-pattern]--->permute()---                                ---[variation]
                              \
                               ---[variation]--->permute()---

I can then move through this tree structure vertically, horizontally or randomly. With every new branch the patterns get more and more permuted. In the program the class is instantiated like this:

let polytree = new Pattern(
    new PolyphoneSequence(
        generate_drums(4),  // length: 4 quarter notes
        generate_guitar(3), // 3 root notes
        generate_bass(4)    // 4 root notes
    )
);

We create an object polytree and hand over the input patterns. The input patterns are part of yet another class called PolyphoneSequence. Three functions hand the basic patterns over to this PolyphoneSequence class, namely generate_drums(), generate_guitar() and generate_bass(). These provide the patterns that will later be permuted. This is an example of how these patterns could look in this specific case:

// 0 is interpreted as a bass drum note, 1 as a snare drum and 2 as a hi-hat
drums = [[0,2], [], [], [], [2], [], [], [], [1, 2], [], [], [], [2], [], [], []]
// the length is given by the drum part; the number of root notes that will later be harmonized is specified
guitar = [[25], [], [], [], [], [], [], [], [25], [], [], [], [24], [], [], []]
// follows the same principles as the guitar
bass =  [[25], [], [], [], [26], [], [], [], [25], [], [], [], [24], [], [], []]

The length specifies the number of quarter notes in a given pattern. Between every quarter note, 3 empty slots are created that can be filled up later on. In this case the length of the pattern is 16 sixteenth notes. It works like a step sequencer, where a counter iterates through these arrays, playing the notes at their given positions.
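
The playback side is beyond the scope of this chapter, but the step-sequencer idea can be sketched like this (trigger_drum(), trigger_guitar() and trigger_bass() are hypothetical stand-ins for my actual playback functions):

function step(counter) {
    const index = counter % drums.length;
    // play every note stored at the current step
    // (0 = bass drum, 1 = snare drum, 2 = hi-hat)
    for (const note of drums[index]) trigger_drum(note);
    for (const note of guitar[index]) trigger_guitar(note);
    for (const note of bass[index]) trigger_bass(note);
}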

So what happens inside the PolyphoneSequence class? The PolyphoneSequence holds all these arrays. When the permute() function is called inside the Pattern class, a series of different permutation functions is called. For these permutations I think in different levels of notes. In the following examples I will only give one set of rules per permutation, to keep things clearer for the reader.

let randomIndex = Math.floor(Math.random() * this.drums.length)

if (randomIndex % 4 == 0) {
    // first level (4th notes)
    // if snare or bass drum add bass drum 2 before or 2 after
    if (this.drums[randomIndex].includes(DRUMTYPES.BD) || this.drums[randomIndex].includes(DRUMTYPES.SD)) {
        if (Math.random() < 0.6){
            this.add_instr(randomIndex - 2, DRUMTYPES.BD);
        } else {
            this.add_instr(randomIndex + 2, DRUMTYPES.BD);
        }
    }
}

First, a random number is chosen within the bounds of the drum array. Then I check which level this number is on. If randomIndex % 4 == 0 is true, we are on the first level, or on a quarter note so to speak. In the next step I check what is already present at that specific index. In this case I check whether there is a bass drum - includes(DRUMTYPES.BD) - or a snare drum - includes(DRUMTYPES.SD). If one of these two statements returns true, we add a new bass drum at +/- 2 from the current index. This means this permutation on the first level adds an element to the second level, the second level being randomIndex % 4 == 2. If a permutation is then triggered on the second level, an element is added to the third level.

The level structure looks like this:

              x o o o x o o o x o o o x o o o 
first level:  |       |       |       |
second level:     |       |       |       |
third level:    |   |   |   |   |   |   |   |
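
In code, the level of a given sixteenth-note index can be read off like this (an illustrative helper, not part of my actual source):

function level_of(index) {
    if (index % 4 == 0) return 1; // quarter notes
    if (index % 4 == 2) return 2; // off-beat eighth notes
    return 3;                     // remaining sixteenth notes
}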

What makes this interesting is that these permutations always build on the elements that are already there, in a way that makes sense musically. Another reason I chose this approach is that, because of the way generate_drums() adds empty slots between its notes, these levels will always be present in every pattern.

What is missing now is the possibility to have permutations above the first level. Also, these permutations always occur at a single random index, so there is no way to introduce repeating patterns of permutations. That’s why I included two more forms of permutation that happen before this one.

const bd_add = 0.4;    // probability to add a bass drum
const bd_remove = 0.2; // probability to remove a bass drum
const sd_add = 0.3;    // probability to add a snare drum
const sd_remove = 0.2; // probability to remove a snare drum

for (const step of this.drums.keys()) {
    if (step % 8 == 0) {
        if (Math.random() < bd_remove) this.remove_instr(step, DRUMTYPES.BD);
        if (Math.random() < bd_add) this.add_instr(step, DRUMTYPES.BD);
        if (Math.random() < sd_remove) this.remove_instr(step, DRUMTYPES.SD);
        if (Math.random() < sd_add) this.add_instr(step, DRUMTYPES.SD);
    }
}

This iterates through the array and, in this case, on every 8th step there is a chance that an element is removed or added. This introduces the possibility to add and/or remove elements outside of the first level. It also allows me to add elements that do not depend on elements already present. This will in turn make further permutations more probable if elements are added, and less probable if elements are removed.

if (Math.random() < 0.5) {
    for (const step of this.drums.keys()) {
        if (step % 8 == 4) {
            this.remove_instr(step, DRUMTYPES.HH)
        }
    }
}

This code block allows me to introduce a pattern of permutations that repeats over the array. In the last example we first chose an index and then applied a probability to decide whether a permutation occurs; here we first apply a probability and then execute the permutation on every repeating step. In this case we would remove the hi-hat on every second quarter note, giving the pattern a half-time feeling.

These examples demonstrate how rhythm is generated in this work. The guitar and bass rhythms follow similar ideas, so let’s look at how harmonic content is organized.

As we saw above, the base patterns for guitar and bass contain only root notes. These root notes can then be harmonized from a set of chord shapes that looks something like this:

const chord_templates = [
    { type: 'power', shape: [0, 7] },  // power chord: root + fifth
    { type: 'dyad', shape: [0, 8] },   // minor sixth
    { type: 'dyad', shape: [0, 5] },   // fourth
    { type: 'dyad', shape: [0, 3] },   // minor third
    { type: 'dyad', shape: [0, 2] },   // major second
    { type: 'dyad', shape: [0, 1] },   // minor second
    { type: 'dyad', shape: [0, 10] },  // minor seventh
    { type: 'triad', shape: [0, 7, 14] }, // fifth + ninth (sus2 voicing)
    { type: 'triad', shape: [0, 7, 15] }, // fifth + minor tenth (minor voicing)
    { type: 'barre', shape: [0, 7, 12, 15, 19, 24] }, // minor barre chord
    { type: 'barre', shape: [0, 7, 12, 14, 15, 24] },
    { type: 'barre', shape: [0, 7, 12, 14, 17, 24] },
];
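
The function make_chords(), which appears in the next code block, could be implemented along these lines (a sketch; applying a shape to a root note is just a matter of transposition):

// build all chords of a given type on a root note
function make_chords(root_note, type) {
    return chord_templates
        .filter(template => template.type == type)
        .map(template => ({
            type: template.type,
            chord: template.shape.map(interval => root_note + interval)
        }));
}

For example, make_chords(25, 'power') yields a single entry whose chord is [25, 32].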

Inside the PolyphoneSequence chords are harmonized like this:

generate_harmony() {
    // harmonize the base pattern (the root-note arrays seen above)
    let chords = this.base_pattern.map(function mapper(root_note) {
        if (Array.isArray(root_note)) {
            // unpack the array (a step can hold several root notes)
            return root_note.flatMap(mapper);
        } else {
            // choose a chord type
            let type = Math.random() < 0.5 ? 'power' : 'barre';

            // generate an array of all chords matching the type
            let chordtypes = make_chords(root_note, type);
            return chordtypes[Math.floor(Math.random() * chordtypes.length)].chord;
        }
    });
    this.guitar = chords;
}

First the array of root notes is unpacked to check whether an index contains a note number or not. If it contains a note number, we have - in this case - a 50% chance of harmonizing the given note with either a power chord or a random barre chord. It is also possible to have multiple root notes on one index, which stacks different chords on top of each other. For more melodic parts there is yet another function:

generate_melody() {
    let selected_chord_set;
    this.guitar.forEach((chord_set, i) => {
        if (chord_set.length > 0) {
            // there is a new chord at this step
            if (Math.random() < 0.5) {
                // use the chord that is already present --> arpeggiate
                selected_chord_set = chord_set;
            } else {
                // generate a scale by merging different chords
                selected_chord_set = mergeChords('barre', 'dyad', 'dyad');
            }
        }
        // (simplified: the melody notes are then drawn from selected_chord_set)
    });
}

Here I iterate through the guitar pattern and check whether a chord is present. When there is a chord, the melody can either arpeggiate through the given chord, or a set of new chords - built on the same root note - can be merged into a scale. The bass follows a very similar concept to this melody generator.
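
mergeChords() could work along these lines (again a sketch; my actual implementation may pick the shapes differently):

// merge randomly picked shapes of the given types into one sorted
// set of intervals - effectively a scale over the root note
function mergeChords(...types) {
    const intervals = new Set();
    for (const type of types) {
        const matching = chord_templates.filter(t => t.type == type);
        const template = matching[Math.floor(Math.random() * matching.length)];
        template.shape.forEach(interval => intervals.add(interval));
    }
    return [...intervals].sort((a, b) => a - b);
}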

These are the main concepts behind my generative system. All these methods are designed to be flexible: they can reproduce structures that are very common in Black Metal, but they can also be scaled in complexity, by adjusting parameters and probabilities, to create complex patterns - that still have that Black Metal feeling.
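
To tie everything together, a run of the system could be sketched like this (the traversal policy and the play() callback are illustrative, not lifted from my source):

// grow the tree and walk a random path through it
polytree.permute();
polytree.pattern_1.permute();

let node = polytree;
while (node !== undefined) {
    play(node.base_pattern); // hand the pattern to the step sequencer
    // descend randomly into one of the two variations
    node = Math.random() < 0.5 ? node.pattern_1 : node.pattern_2;
}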

Conclusion

The aim of this documentation was to answer the questions that were important during my artistic process. In the first part I explained the challenges that Generative Music, in my opinion, has to overcome. I showed why JavaScript proves to be an excellent tool for this task and pointed out the benefits of using free technology, for artists individually as well as for the community. I went on to explain the main concepts behind Black Metal and its aesthetics, and how these have influenced me as an artist. I analyzed, with examples, the genre-defining musical elements of Black Metal, which laid the foundation of my code. Inspired by the more recent efforts of the Black Metal avant-garde to push dehumanization and de-individualization to new limits, combined with the concepts of anonymity and the negation of self-staging that have been part of the genre since its birth, I strive to push the boundaries even further, seeing a generative approach as one very viable, if not logical, continuation. I feel that this approach is a meaningful way of leveraging technology to add musical value to the genre, and that it opens up a new conceptual dimension of Black Metal. Lastly, I showed the most important concepts behind my code to give an understanding of how my generative system works under the hood. The biggest challenge for me was to create a system that implements the various elements that constitute Black Metal, but allows going beyond an imitation of the genre. For me this work is like a curation of the genre and its history, one that starts at the very beginning and reaches into the future of what Black Metal might become.

Footnotes

  1. Bainter, Alex, The Future of Generative Music, Why the world is finally ready for endless music systems, 2. April 2019, https://medium.com/@metalex9/the-future-of-generative-music-e19b6722deb2 (last visited: 9. May 2020) 

  2. Barrett, Brian, Flash. Must. Die., 15. July 2015, https://www.wired.com/2015/07/adobe-flash-player-die/ (last visited: 9. May 2020) 

  3. Smus, Boris, Web Audio API: Advanced Sound for Games and Interactive Apps, California, 2013, p. 1 f.

  4. Rogers, Chris, Web Audio API, W3C Working Draft, 15. December 2011, https://www.w3.org/TR/2011/WD-webaudio-20111215/ (last visited: 9. May 2020)

  5. W3C, Standards FAQ, https://www.w3.org/standards/faq (last visited: 9. May 2020) 

  6. MDN web docs, Web Audio API, https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API (last visited: 9. May 2020) 

  7. Rogers, Chris, Adenot, Paul, Wilson, Chris, Web Audio API, W3C Working Draft, 10. October 2013, https://www.w3.org/TR/2013/WD-webaudio-20131010/#Features (last visited: 9. May 2020) 

  8. Rogers, Chris, Adenot, Paul, Toy, Raymond, Web Audio API, W3C Candidate Recommendation, 18. September 2018, https://www.w3.org/TR/2018/CR-webaudio-20180918/#Features (last visited: 9. May 2020) 

  9. Cassel, David, Brendan Eich on Creating JavaScript in 10 Days, and What He’d Do Differently Today, in: THENEWSTACK, 26. August 2018, https://thenewstack.io/brendan-eich-on-creating-JavaScript-in-10-days-and-what-hed-do-differently-today/ (last visited: 9. May 2020) 

  10. Stack Overflow, Developer Survey Results, 2019, https://insights.stackoverflow.com/survey/2019#technology (last visited: 9. May 2020)

  11. Peyrott, Sebastian, A Brief History of JavaScript, 16. January 2017, https://auth0.com/blog/a-brief-history-of-javascript/ (last visited: 9. May 2020), Punchatz, Charles, How JavaScript Became the Dominant Language of the Web, 7. August 2017, https://www.lform.com/blog/post/how-JavaScript-became-the-dominant-language-of-the-web (last visited: 9. May 2020) 

  12. Free Software Foundation, About, What Is Free Software? https://www.fsf.org/about/what-is-free-software 

  13. GNU Operating System, What Is Free Software?, https://www.gnu.org/philosophy/free-sw.en.html (last visited: 9. May 2020) 

  14. Github, Tone.js commit history, https://github.com/Tonejs/Tone.js/commit/d1738664b8d6f920f6da779f88646d4017d610d2 (last visited: 9. May 2020) 

  15. rop, Sola fide: Deathspell Omega und die Reformation des Black Metal, in: Alpkvlt, 09. March 2018, https://alpkvlt.ch/sola-fide-deathspell-omega-und-die-reformation-des-black-metal/ (last visited: 9. May 2020)

  16. Interview with Deathspell Omega from AJNA Offensive, http://ezxhaton.kccricket.net/interview.html (last visited: 9. May 2020)