The Particular Finest

Presented by aurynn shaw

The (Deep) Time Of Your Life

So one of the things that I keep getting asked about is when I refer to my past and my future as different people. I speak of the intents of my past, the situations she’s left me in and the struggles I face because of her.

I do this because past-aurynn is a real force in my life, a person who had hopes and goals and dreams but with whom I cannot hold a conversation. I can’t talk to her, share my triumphs or my failures, can’t ask her if it’s everything we hoped it would be.

She can’t ever know if we succeeded, all she can do is hope that things got better.

I speak of the aspects of my future, all the things I don’t and can’t know, only that my choices affect her choices and set the road upon which she treads.

My actions, or lack thereof, change her life. I am her past-aurynn, and I cannot know what our successes and triumphs will be. She will never be able to talk to me. All I can do is do, and hope for the best.

Communication is Key

One of the things that this worldview strongly reminds me of is the idea of four-dimensional teams. My past self is trying to have a conversation with me, sharing her ideals and desires with me through the actions and artefacts that are left behind. Within the bounds of a fallible memory I can remember her triumphs and joys, share in her sorrows and disappointments, and burn with the shame of those past decisions.

By the same token, I’m trying to have a conversation with my future self. I’m trying to communicate what I think we should do, lay out a future in which she’ll be happy, or content, or able to reach more of our goals.

But conversation isn’t the right word, not that there is even a right word. I can’t converse with the future, but I can leave artefacts and ideas, open some doors and close others and try to communicate my intent.

Try to make it obvious what I wanted to have happen.

Try, Try, Try Again

Try is the key word, and calls back to why I talk about “deep” time. Deep Time was an idea, put forward in a book of the same name, about how to perform intentional communication with our far-future selves, people who wouldn’t speak our language or have our symbols, who would be outside of anything resembling our context.

In the context of the book, the challenge was describing how to communicate that we’ve left radioactive waste in certain locations, and that digging it up would be, well, perhaps not ideal.

Within my own life I often face the same ideas. I can only partially rely on my own memory, and even that is often suspect and illusory. Who I am and what I want shifts as the years pass by, must shift as I grow and learn and discover nuance, that I have lost interest, or that what I thought was truly important didn’t work the way I expected it to.

The only way I can reliably understand what I wanted from the past is to ask the present and the surroundings I find myself in, find the artefacts that brought me to where I am.

I live in New Zealand because a very young girl saw Jurassic Park and wanted to be one of the people who did that, who made amazing dinosaurs happen on screen, got a job in visual effects, and got to move across the world. It was a memory, a feeling, a desire, a path that past-aurynn had put the present on, and I found myself walking those steps across the world.

It wasn’t what I’d dreamed it would be.

Not bad, not by a long shot, but not what I dreamed it would be. Dreams are by their very nature unrealistic, and holding that kind of dream, anything would have a hard time living up to it.

Access Control

This was the first brush of needing to learn a new skill, a new way of thinking about the past and the present and what I should or shouldn’t do. I’d had a goal that the past had set for me, an aspiration that I should be doing this. That I had an obligation to that child to keep on with her dreams without introspection because, once, I had wanted that.

But I changed. I had to change as I grew and learned and who I was shifted. But what I hadn’t learned was that I had to give myself permission to not do what I thought I wanted to do, to try new things, to be someone other than what I had once wanted.

I had to learn that maybe that wasn’t what I wanted anymore.

Most critically, I had to learn that that wasn’t a bad thing. Past-aurynn is a suggestion, not a requirement. An option, not the only option.

Her choices and desires may no longer be mine. I get to choose.

Fail

Learning this enabled me to learn that it was okay to lose interest in things I loved, okay to find out that things maybe weren’t as much fun as I thought they would be.

It was okay to fail. Failure no longer meant that I was unable to fulfil the goals that I’d set for myself, that I was a disappointment to my past self, unworthy of my dreams, goals and aspirations. Instead I was able to let go and enjoy setting up a world for my future where she could experiment more, explore more, and try more things.

A world where I could start to see that the culture of achievement, “passion” and “merit” in tech was toxic, because it gave no space to admit that your past choices may no longer be correct. If we question, it means we weren’t really passionate about tech or programming. Real programmers just knew and kept on knowing. They never questioned, never had to question.

In Contempt Culture, not knowing something means you don’t belong, you aren’t part of the group.

Tomorrow, Tomorrow

For right now, photography is one of the important factors in my life. It is an art form I adore, a unique worldview that could only be mine, and an entire wealth of interesting kit to drool over. I have tens of thousands of photos as a result, and three published books, and I expect to take tens of thousands more photos.

But I also may not. One day I’ll be past-aurynn, a suggestion and a series of choices I made to focus on photography and writing this blog. It’s time that future-me won’t have again, but as far as she must be concerned, all of this, everything I have done?

It’s a suggestion. An option.

Tomorrow, future me becomes present me, and she gets the choice I get, to continue or not, to change or not, to decide something new.

She doesn’t know, and neither do I, if the choices we made before were the right choices. We’ll never know. The passion may fade, and that?

That’ll be okay.

Ceci n'est pas une pipeline problem

This post was originally a tweetstorm! You can find the original tweetstorm right here.

Okay let’s dig into this, which is a blog post where he’s talking about the fallout around the recent, ragingly sexist 31c0n (which I’m not linking to from here).

Remember, this is the con that thought it was OK to tweet (from the official account) glorifying “getting wasted”.

So, to start with we have pretty classic tone policing, while grudgingly admitting that we have a point that, yes, it was sexist.

There were no women speakers because the organisers didn’t ask any women, because they didn’t care enough to find any. Getting women requires reaching outside our networks, out of our comfort zones. It requires doing more.

Second, the ticket price (NZD$750+GST!) is quite an exclusionary measure.

Women are, in general, less likely to get time off or the financial support from their employer to go to conferences, compared to men.

Next, the “They’re my mates, of course they’re not sexist!” argument. This is a subset of the “he’d never do that!” fallacy. Deliberate sexism isn’t just “I hate women”, but is also actions where our skills are ignored, where reaching further to find us is disregarded as unimportant or unnecessary.

It’s also part of the false meritocracy, where “best speaker” only means “has knowledge like I have”, a background like the selection committee’s and behaviour like theirs, time to explore like they had.

Time that women are less likely to have.

So they’re his mates, but they performed that sexism, and deliberately. They chose not to look beyond themselves, their shallow and mediocre networks. They CHOSE.

They chose not to look further or try harder.

Next up, he admits that he wasn’t able to place more women in technical roles, but he doesn’t talk about how women are less likely to apply, or about how they’re far less likely to pass the sexist “best candidate” filters of the false meritocracy, when studies continue to show that women are punished for reaching beyond what they can prove, and men are rewarded for “their potential.”

Finally, we get to the shallow analysis that all people confronted with their industry sexism for the first time reach to: the pipeline.

BUT THE PIPELINE has been so thoroughly debunked at this stage that it’s ridiculous it keeps coming up. Yes, the pipeline is important, but “pipeline” arguments perform ignorance that the culture they lead to is horrifying and unwelcoming. It’s sexist, it’s othering, and it doesn’t care.

It shows us a culture in which we’re not welcome by throwing conferences after Kiwicon’s amazing work that barely have women attend, let alone speak.

It’s a culture that shows us we’re not welcome through recruiters who think it’s our fault for not applying, for not being more visible, for not being more like the men we’re compared to.

A culture that ignores that women and minorities face a terrible rate of attrition because of it, that every day we’re told we’re not good enough.

A culture where we’re not welcome, because we weren’t better role models, helping fill the pipeline more.

It’s not the pipeline. It’s never the pipeline. It’s the culture at the end of the pipeline, the culture that generates articles like this. A culture where his mates aren’t to blame for their sexism, a culture where not having any women is acceptable.

It’s a culture where even brilliant, shining examples like Kiwicon, who learned and tried and did it better, are wilfully ignored.

It’s a culture where I am less welcome.

Impostor Syndrome

Impostor syndrome is rife, endemic even, in the tech industry, leaving so many of us feeling like we don’t belong, like we’re just one minor screwup or lack of knowledge from being outed as the frauds that we feel we are.

At the end of February 2017, a Twitter thread started happening, where people in the tech community, prominent individuals widely seen as experts, spoke up about all the things they can’t do, that they have to regularly look up or ask about, or that otherwise make them feel like they’re not a real engineer.

It was, in many ways, heartwarming. So many people have spoken out about how vulnerability around our knowledge or abilities makes our spaces safer for learning and growth, where not knowing is acceptable.

I was glad to contribute as well, lending my voice to say that I don’t know everything and that things are hard.

But I also had to bring up a very important point: doing so, as a woman in tech and a visible minority, is risky for me.

Reality Checks

This risk is well-known by most women & minorities in STEM fields, and was so beautifully captured by this xkcd comic. We are held as torch-bearers, representatives of the classes to which we belong.

It is never that aurynn is bad at regexes but that women are bad at regexes, and I am the latest in a long line of stand-ins, each time a reinforcement that not having a skill in one area means a lack of skill in all areas.

Speaking out that I am anything less than the perfect epitome of a software developer means that I am giving people like that ammunition, reinforcing their thinking that I am not a real engineer, but a fake and impostor, and that I could never be like him.

A real engineer.

The False Meritocracy

This behaviour of gatekeeping women and minorities is a natural effect of contempt culture, as contempt culture itself is a performance of gatekeeping. We reject knowledge that is outside of the insular groups we belong to, because to acknowledge that such knowledge exists is to bring into question our right to status within the group.

Our social capital was built on the idea that we know the right things, and anyone who doesn’t, doesn’t belong to our group, and they’re not really a part of who we are. They didn’t even know that, for instance, Windows is bad, or PHP is bad, or that no one should use AOL.

One of the things I didn’t talk about with Contempt Culture is that we belong to multiple groups simultaneously. We belong to our programming community, where we learn that PHP is bad and that users of it should be mocked, but we also belong to the class of people who are like us.

We can see how we belong to this larger group when we do tech interviews and base our decisions on “culture fit.” When we interview and talk to people like us, we are more sympathetic to them. We see ourselves in them, and the similarity of our histories means we have so much common ground that we don’t have to work to find it. It’s just there!

Middle class upbringing? Got bullied at school? No one understood you?

We walk down a list of checkmarks and find that they are like us, and we find ourselves understanding how they think, how they solve problems, how smart they are because they’re like us.

It’s not the same when it’s people who we don’t understand without trying. People from a PHP background, we don’t understand how they could use that tool, we don’t have a frame of reference for their experiences and knowledge. We can’t judge them, so we fall back to our cultural knowledge that PHP is bad and so too must they be.

Our culture keeps the gates, judges those who are different more harshly, scrutinises them further to ensure they’re “real”, code for nothing more than “already like us.”

Performing Ignorance

Recently I talked about how one of the side-effects of contempt culture is the reinforcement of displaying and performing ignorance, through a refusal to examine or challenge what we think and why.

Gatekeeping is an act of performative ignorance. We are performing ignorance, reinforcing that we shouldn’t have to look outside of ourselves for knowledge or answers, because we know enough already.

We perform ignorance that their skills are worthy, because we already know that their skills are not, because they are different and we have no frame of reference.

We perform ignorance in that we will not work to understand their frame of reference.

We perform ignorance when we generalise them and fall back on the stereotypes that our culture has taught us. The last time we saw a woman in tech, they failed at this thing, so I’ll have to test her. Oh, and she failed at this other thing too.

Women must be bad at tech, they’re clearly not real.

Forward Projection

Worse than biases being applied to us by those performing ignorance is how the biases against us linger, a stain that will not wash away.

I’ve heard so many stories of people who, because they didn’t know something early in their career, are now known to not know it, or who, having caused an outage, are now known to be unreliable.

The demand of perfection starts at day 1, and never ceases.

Fear and Loathing

Contempt culture forces those of us already here to stay quiet, to risk our permission to belong when being in tech is only one axis of our belonging, and our lack of belonging on the other axis makes any slip-up damning.

We live in fear of liking the wrong tool or admitting to the wrong person that we don’t know, that at any moment the fragile thread of our sense of belonging and participation will be severed, ripped apart by the people we thought we’d appeased.

The forward projection of biases means that any of these minor failures will be a part of the history others carry of us, a part of how as torch-bearers we have Let Down all of those like us, and we lose permission to improve, or grow, learn, or do anything but be quietly excellent from the start.

I could cause an outage and it would not be aurynn who is bad at operations, but women, all because I am different, because there is no frame of reference. But for someone with culture fit, with shared histories, with stories that match, well, I know what he was thinking.

I see how he could have screwed up.

It’s not his fault.

It was just this once.

We’ll just

let it




slide.

More Fun with Terraform Templates

So you may have noticed last time that I said I’m trying to create complex JSON objects from within Terraform.

In this case, I really want to be able to create an AWS ECS container definition, which looks a bit like this (copied from the AWS docs, here)

 {
   "name": "wordpress",
   "links": [
     "mysql"
   ],
   "image": "wordpress",
   "essential": true,
   "portMappings": [
     {
       "containerPort": 80,
       "hostPort": 80
     }
   ],
   "memory": 500,
   "cpu": 10
 }

The important part for me here is making a module to create these JSON blocks. This will let me keep all the variables in Terraform variable files, and ensures that I can interrogate the state file as to what variables are set, and for what container definition.

Ideally, I want the declaration to look something like this

module "container_definition" {
  source = "./ecs_container_definition"
  
  name = "container"
  image = "hello-world"
  essential = "true"
  memory = 500
  cpu = 10
  port_mappings = [
    {
      "containerPort": 80,
      "hostPort": 80
    }
  ]
}
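And on the consuming side, the idea is that the module hands back a rendered JSON string. A sketch of the wiring I have in mind, where the output name and the wrapping brackets are my assumptions rather than finalised code:

```hcl
# Inside the module: expose the rendered block (output name assumed).
output "json" {
  value = "${data.template_file._final.rendered}"
}

# In the caller: wrap the fragment in [] to form the
# container_definitions array for the ECS task definition.
resource "aws_ecs_task_definition" "app" {
  family                = "app"
  container_definitions = "[${module.container_definition.json}]"
}
```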

So, we only have three basic pieces of data to worry about here:

  1. Simple key-value associations
  2. Array of strings
  3. Arrays of maps

As we looked at last time, the jsonencode function can’t deal with an array of maps (or any complex datatype), so we have to unpack this manually.

But, we also can’t use a jsonencode for the basic data pieces either, because making a map that we then encode means we’d end up with a JSON string that we couldn’t expand with the complex data types we need to create.

So that won’t work.

What will work, however, is the bit we used last time, specifically the join(",\n", var.list) that we used. However, instead of using a variable directly, we can instead create the list on the fly using the list() function from Terraform.

Layer 1

That’s set the scene on what we want and how we’ll get it to work. Let’s dig into what it’d look like.

I’m going to skip the variable declarations this time around, and focus on just the data declarations and the resulting JSON blocks.

To start, let’s just have a basic module call, like this

module "container_definition" {
  source = "./ecs_container_definition"
  name = "container"
  image = "hello-world"
  essential = true
}

Three things, completely straightforward. Should be easy.

So, going with our dynamic list and join, what will the template look like? Probably something like this

data "template_file" "_final" {
  template = <<JSON
  {
    $${val}
  }
JSON
  vars {
    val = "${join(",\n    ",
        list(
          "${jsonencode("name")}: ${jsonencode(var.name)}",
          "${jsonencode("image")}: ${jsonencode(var.image)}",
          "${jsonencode("essential")}: ${var.essential ? true : false }",
          )
      )}"
  }
}

So here’s where it’s starting to get a bit, well, not great. As you’ve noticed, each key in the list has to be run through jsonencode, to ensure that it’s properly quoted. The values have to be wrapped in quotes as well, so they’re encoded.

Because we don’t want a list of single-key JSON object strings, we can’t just encode as a list of maps.

var.essential is interesting, as well. Passing a boolean value like true above gets converted to 1 by the module process, so here we just cast it back.

This will probably fail miserably if you pass in the "false" string, instead of the false boolean.

Finally, when we render it, we get

{
  "name": "container",
  "image": "hello-world",
  "essential": true
}

Which looks perfect! Exactly what we want.

Next, let’s add the links array. This one should be easy, because it’s just a list of strings, and we can rely entirely on jsonencode.

"${jsonencode("links")}: ${jsonencode(var.links)}"

Easy. And the output is right, too

"links": ["mysql"]

Port of Call

Next, we’ll add in the port mapping section, the complex part, the array of maps, which is the first point where we need to break out a second template_file to handle the rendering. This is specifically so we can use the count construct to iteratively jsonencode the elements of the list, and then wrap the entire contents in [].

This is going to look something like

data "template_file" "_port_mapping" {
  count = "${length(var.port_mappings)}"
  template = "$${val}"
  vars {
    val = "${jsonencode(var.port_mappings[count.index])}"
  }
}

We can be pretty dense here, as all we’re trying to ensure is that each element of our array has been rendered by jsonencode, and doesn’t require additionally complex actions.

Adding it to our list would be

"${jsonencode("portMappings")}: [
     ${join(",\n", data.template_file._port_mapping.*.rendered)}
]"

which gives us the output we’re looking for:

"portMappings": [
   {"containerPort":"80","hostPort":"80"}
]

Great!

Okay, so, what if we leave things off? image and name aren’t optional, so we just don’t provide a default and let the Terraform compiler handle that case. essential isn’t an essential field, so I think we should be able to drop that successfully. Let’s do that.

Hm

Errors:

  * __builtin_StringToBool: strconv.ParseBool: parsing "": invalid syntax in:

Well,

That’s not good. That’ll be the var.essential ? : section, where we try to cast an int into a boolean.

So we’ll need to detect if we’re passing in the default empty string, and do something useful based on that.

But that’s easy! We’ll just add another ternary to test it! Something like this

"${ var.essential != "" ? "${jsonencode("essential")}: ${var.essential ? true : false }" : "something" }",

and then we go again, and

Errors:

  * __builtin_StringToBool: strconv.ParseBool: parsing "": invalid syntax in:

oh

it’s evaluating it… twice…

Hrm.

Okay. We can solve this. How about

...
"${var.essential != "" ? data.template_file.essential.rendered : ""}",
...

data "template_file" "essential" {
  template = "$\${jsonencode(\"essential\")}: $${val ? true : false}"
  vars {
    val = "${var.essential != "" ? var.essential : "false"}"
  }
}

Eesh. That’s not great. It works, but, yeah, not very well. The first evaluation always takes place, even if the other branch in the comparison is taken. This means that, no matter what, I have to create the essential template node, even if essential is undefined, to pull off this effect.

You may be asking “why is she even trying to cast things to true or false? JSON says it’ll just work.”

And the answer is because Terraform tries to be clever, and turns the JSON blob into a struct. Which is strictly typed to expect a bool.

Which means it complains loudly at anything that’s not a literal true.

Fortunately, CPU and Memory should be easy, we can just test if they’re defined inline easily, such as

"${var.cpu != "" ? "${jsonencode("cpu")}: ${var.cpu}" : "" }",

Collapse the List

Rendering out an empty string, "", does have one negative side effect, in that we end up with a JSON block that’s invalid. However, there was a reason I picked the empty string as my return value, and that’s the compact() function in Terraform.

compact() takes an array, and strips out all the items that are empty, so changing cpu above, for example, means the entire

"cpu": 10

line just won’t render if cpu isn’t defined, which is perfect for our “dynamic” goal.
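A minimal sketch of the pattern, using a made-up fragment rather than the module’s real code:

```hcl
# Hypothetical fragment: with var.cpu = "", the ternary yields "",
# compact() strips the empty entry, and join() never emits a "cpu"
# line or a dangling comma.
val = "${join(",\n", compact(list(
    "${jsonencode("name")}: ${jsonencode(var.name)}",
    "${var.cpu != "" ? "${jsonencode("cpu")}: ${var.cpu}" : ""}"
  )))}"
```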

Back to Port

Okay, so, back to port mappings.

Unfortunately, due to the complexity of the operation, we need to do the same thing we did with the essential entry to ensure that it is a bool, and break it into its own template_file, like this

data "template_file" "_port_mappings" {
  template = <<JSON
  "portMappings": $${val}
JSON
  vars {
    val = "${join(",\n", data.template_file._port_mapping.*.rendered)}"
  }
}

and address it in the final render like so

"${length(var.port_mappings) > 0 ?  data.template_file._port_mappings.rendered : ""}"

And, lo and behold, it renders correctly.

This has “bad idea” written all over it

Of course, the proof is in the pudding: Will AWS accept this as a valid task definition?

After removing links, our final rendered block is

  {
    "name": "container",
    "image": "hello-world",
    "cpu": 10,
    "memory": 500,
    "essential": true,
    "portMappings": [
    {"containerPort":"80","hostPort":"80"}
     ]
  }

which looks agreeably correct, to me, but it’s not me that must be agreeable, but Terraform and AWS.

And the answer is

no.

At some point in this, our map had the port values changed from integers into strings, and Terraform doesn’t cast from strings when deserialising the JSON blob.

sigh fine.

So, each portMapping has three elements: a hostPort, a containerPort, and an optional protocol, where protocol can be either “tcp” or “udp”.

Because it’s two different kinds of things, we’re going to need another compacting list render step.

OKAYFINE.

OKAYFINE

So, we need more control over the port mapping render. We already broke it out into its own template_file block, so let’s start there.

Because of how we’re using the count.index to create a terraform node array, we’ll have to check for enumeration here. We can’t use the same trick we’re using in the main renderer, where we collapse a list using join, at least not in the same way.

Actually

Maybe we can.

Because port_mappings is an array of maps, we can use lookup() to pull a value out of each map during our iteration, and use the default to return a "" in the case that it’s not present.

Which we can then use as our list elements

Which we can then collapse into just the elements that exist

which we can turn into our rendered dict! Like this!

data "template_file" "_port_mapping" {
  count = "${length(var.port_mappings)}"
  template = <<JSON
$\${join(",\n", 
  compact(
    list(
    host_port == "" ? "" : "$\${ jsonencode("hostPort") }: $${host_port}",
    "$\${jsonencode("containerPort")}: $${container_port}",
    protocol == "" ? "" : "$\${ jsonencode("protocol") }: $\${jsonencode(protocol)}"
    )
  )
)}
JSON
  vars {
    host_port = "${ lookup(var.port_mappings[count.index], "hostPort", "") }"
    # So that TF will throw an error - this is a required field
    container_port = "${ lookup(var.port_mappings[count.index], "containerPort") }"
    protocol = "${ lookup(var.port_mappings[count.index], "protocol", "") }"
  }
}

Isn’t it beautiful.

EDIT

I’ve split the $\$ segments above with a \ to prevent a rendering error. Remove the \ to make it work. 😄

Unto the Breach

Okay, our final rendered output looks like

  {
    "name": "container",
    "image": "hello-world",
    "cpu": 10,
    "memory": 500,
    "essential": true,
    "portMappings": [
  {
"hostPort": 80,
"containerPort": 80
}

]

  }

We’re getting a decent amount of spurious whitespace at this point, but we’re just going for a proof of concept. We can clean that up later.

Again, the proof is in the pudding. Will Terraform take this snippet?

YES!

Next Steps

yes

At this point, this module is a very barebones implementation, and doesn’t support the majority of options that the ECS task definition supports, but now that we have a reasonably complete implementation for the basics it should be relatively straightforward to fill out the rest of available options.

Just Because You Can

This is probably an excellent example of that old axiom, “just because you can doesn’t mean you should.” At the same time, it provides a considerably cleaner interface for the user to work with a container definition, which is an important win.

More than anything, I think it’s amazing to see how you can bend a tool that clearly isn’t designed to generate dynamic JSON into generating dynamic JSON.

Fun with Terraform Template Rendering

One of the things I do as part of Eiara is write a lot of Terraform, an infrastructure definition language, to provide a sensible baseline cloud instantiation of infrastructure and resources.

I’m quite fond of Terraform as a tool, even though it still has a decent number of weirdnesses and edge cases. If you haven’t seen it you should look at Charity’s blog about Terraform for a great rundown on those issues. It’s quite powerful, and meshes well with how I think, letting me express exactly what I mean.

Part of what I’m doing requires rendering out JSON templates for use with AWS. This is a pretty normal requirement for doing anything with AWS, from the IAM policies to the ECS task definitions. Lots and lots and lots of JSON.

Lots. (lots)

Specifically, what I’m trying to do right now is make a list of dicts, where each dict is the representation of a single module, which then get jammed together to be inserted as a list in a larger block of JSON.

Straightforward, right?

Well…

Gotchas as a Service

First off, the best way to see what’s going on is to write out what’s going on to disk and look at it. Terraform doesn’t directly let you do that, instead requiring an approximation, with something like:

resource "null_resource" "export_rendered_template" {
  provisioner "local-exec" {
    command = "cat > test_output.json <<EOL\n${data.template_file.test.rendered}\nEOL"
  }
}

Note the \ns in order to make sure the multiline expands properly. Of course, nothing could go wrong with a rogue template here, but I digress.

But, we can write content out to disk, and check that our JSON blobs are working as expected. Great!

That’s Interesting

Next up, template files. We render them with a straightforward block,

data "template_file" "test" {
  count = "${length(var.alist)}"
  template = "${file("./tpl.json")}"
  vars {
    variable = "${var.myvar}"
  }
}

and everything is fine.

Interestingly, files that are rendered by the Terraform template system have access to the full range of functions provided by the Terraform interpolation engine. This means that you can use the file() function from inside a template file.

That’s curious. I’m sure nothing bad could happen there.
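To make that concrete, a rendered template could itself pull files in at render time. A contrived example, with file names invented for illustration:

```hcl
# A hypothetical tpl.json: the template engine evaluates file() at
# render time and inlines the contents of another file as a JSON string.
{
  "motd": ${jsonencode(file("./motd.txt"))}
}
```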

Complex Data Types and Templates

Back to trying to render my JSON. The first thing I tried was just to plug a list in, and try to render it inside the template, much like this:

variable "alist" {
  type = "list"
  default = [1,2]
}

data "template_file" "test" {
  template = "${file("./tpl.json")}"
  vars {
    alist = "${var.alist}"
  }
}

Unfortunately, Terraform doesn’t, as of 0.8.7, let you pass complex types into the template renderer, so that doesn’t work.

However, if we use join(",", var.alist), it’ll render much as we expect it to for numbers.

{
  "list":[1,2]
}
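The post never shows tpl.json itself; to produce that output it would presumably look something like this, with the template supplying the brackets and the joined variable filling the middle. This is my reconstruction, not the actual file:

```hcl
{
  "list":[${alist}]
}
```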

What about if we use strings?

variable "alist" {
  type = "list"
  default = ["a","b"]
}

Output:

{
  "list":[a,b]
}

Well, that breaks. But! We have the jsonencode() function, which returns blocks of JSON. Great! We can render our list arbitrarily!
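Concretely, that means handing the whole list to jsonencode and letting it supply the brackets and quoting itself. A sketch, assuming the same tpl.json setup as above:

```hcl
data "template_file" "test" {
  template = "${file("./tpl.json")}"
  vars {
    # jsonencode renders the full list, quotes included, e.g. ["a","b"]
    alist = "${jsonencode(var.alist)}"
  }
}

# tpl.json then shrinks to just:
#   { "list": ${alist} }
```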

List of Strings of Rendered JSON

But the goal here is to drop a list of rendered blobs of JSON into our template. How does that hold up with jsonencode ?

variable "alist" {
  type = "list"
  default = [<<EOF
{"foo": "bar"}
EOF
,"b"]
}

Output:

{
  "list":["{\"foo\": \"bar\"}\n","b"]
}

Hm. That’s not good, but, surprisingly, we can use HEREDOCs inside a list declaration.

Neat. But, I digress.

What about a nested map? Will that work?

variable "amap" {
  default = {
    foo = {
      baz = "bar"
      beez = ["a"]
    }
  }
}

Output:

Errors:

  * jsonencode: map values must be strings in:

${jsonencode(var.amap)}

Hrm, not directly.

And we can’t pass complex types into templates.

We could use string literals, but then we’re not able to pass in an arbitrary number of elements through our list. Also, since we’re expecting to return rendered bits of JSON from our modules, this is just going to wrap things in strings, which isn’t what we want anyway.

Okay, What About

So if all I want to do is make a list of dicts, I should be able to render to JSON the dict initially, and then just join the properly rendered JSON blobs with a comma.

Let’s test that.

data "template_file" "test" {
  count = "${length(var.alist)}"
  template = "${file("./tpl.json")}"
  vars {
    alist = "${jsonencode(element(var.alist, count.index))}"
  }
}
resource "null_resource" "export_rendered_template" {
  provisioner "local-exec" {
    command = "cat > test_output.json <<EOL\n${join(",\n", data.template_file.test.*.rendered)}\nEOL"
  }
}

Output:

{
  "list":"a"
},
{
  "list":"b"
}

Okay, that’s really close! We’re rendering into templates and then just joining them together with a ,\n, and it appears to be what we want.

So in order to get it to look right, we’ll need to wrap it in an additional template_file to add the [] pair that we need to have a proper list of dicts, such as

data "template_file" "test" {
  count = "${length(var.alist)}"
  template = "${file("./tpl.json")}"
  vars {
    alist = "${jsonencode(element(var.alist, count.index))}"
  }
}

data "template_file" "test_wrapper" {
  template = <<JSON
[
  $${list_of_dicts}
]
JSON
  vars {
    list_of_dicts = "${join(",\n", data.template_file.test.*.rendered)}"
  }
}

resource "null_resource" "export_rendered_template" {
  provisioner "local-exec" {
    command = "cat > test_output.json <<EOL\n${data.template_file.test_wrapper.rendered}\nEOL"
  }
}

Output:

[
  {
  "list":"a"
},
{
  "list":"b"
}
]

That’s really close! The indentation is a bit off, but Python can read it!
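
As a quick sanity check (a sketch, pasting in the rendered output from above), the ragged indentation really doesn't matter to a JSON parser:

```python
import json

# The wrapper output as rendered above: the indentation is ragged,
# but it is still perfectly valid JSON.
rendered = """
[
  {
  "list":"a"
},
{
  "list":"b"
}
]
"""

data = json.loads(rendered)
print(data)  # [{'list': 'a'}, {'list': 'b'}]
```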

Complexity

This is obviously a bit of a weird, complex case. By trying to hide the abstractions of JSON blocks that represent, in my case, ECS Container Definitions, I’m requiring other places in my code to craft the correct JSON blobs.

But it also feels like good programming practise to build abstractions on top of things like container definitions and provide a cleaner interface to the components I’m working with. And because I’m building two abstractions, one for the container definition and one for the task definition, I can hide the nesting of templates and, in the end, pass a list of module outputs to a module and have it do The Right Thing.

And that feels right.

Trusting Trust

The tech industry is an interesting place. On the one hand, we have the old guard, the 80s-driven “hacker culture” that drips with contempt culture and ivory tower silo mentalities.

On the other hand, we’re seeing a revolution in our culture in the form of DevOps. DevOps is interesting in that it looks like it’s about tools, but in a complete head fake we see that nothing about DevOps is about our tools; instead, it’s everything about how we work and how our teams function, both internally and externally.

The mechanisms of contempt culture drive us towards silos, islands of cultural isolation where our tools and ways are “the best”, and anyone outside is lesser. Less smart, less capable, less worthwhile as a human being. These silos have been the dominant force in the tech industry almost since its inception, from the original MIT “lusers” insult through the self-fulfilling prophecy of “PHP is bad”.

DevOps, by contrast, is a cultural push to dismantle the silos between developers and operations. We recognise that these silos are harmful and reductive, placing us in combative situations where we are unable to achieve our goals. We’re learning to trust each other, even if we don’t yet do so fully. We still expect the mechanisms of contempt to rear their head, for people to treat us badly for our ignorance or failures.

But these silos are more than just the silos between ourselves, they are the silos that isolate IT from the rest of the company, and isolate our communities from the broader world around us.

We don’t trust our users, the people that our jobs exist to serve. We still treat them as an isolated silo, as lesser, as knowing less than we do and consequently being unable to contribute.

They don’t code, after all. Their contributions can’t possibly matter.

More importantly, more harmfully, people outside our silos don’t trust us.

The Untrusted Many

We all have stories about the users we hold in contempt. The people who click links in their email (as though we don’t). The people who don’t take care of their systems the way we think they ought.

The people who choose to use devices that just work, and our contempt flows from us that they would dare use tools that enable their goals without our permission and gatekeeping.

I use “gatekeeping” intentionally, for that is what we do. We blame the victims of our failures, accusing the user of clicking the wrong link, not running the right antivirus, of buying the wrong device. We make them ashamed of asking for help, insisting that they should have known better and that they are not worth our time, and we demand obsequiousness before we will lower ourselves to helping them.

And we say we don’t do this.

The broader public doesn’t trust us because we are cruel, isolationist and harmful not just to them but to each other. Through language that demonstrates our contempt for outsiders, such as RTFM, PEBKAC, and ID10T errors, we assert that our skills are the only ones that matter. Our central tools, such as GitHub, spent years treating code as the contribution. We treat the clients that pay for our time with contempt because they don’t know how to ask, or express what their goals are.

We may claim otherwise, but culture is not what we say, it is the actions that we permit.

We permit the Linux kernel community, the community around a technology that is a vital driving force of modern computing, to be a toxic wasteland. We say we do not value contempt and hostility, but our actions show otherwise.

Outside of our community, we are untrusted. We are the risk, we are the toxic person to many people who know us. We may never have performed contempt culture, never have shamed our family, friends or colleagues, but we benefit from contempt culture. We benefit from the idea that all IT workers are hostile, that the technical are “wizards” and that they should come to us in supplication.

That when they come to us they should beg, and make us feel superior before asking for our help.

I’ve seen this countless times. People become deferential to me when I say I work in technology, because they are so used to people in tech treating them badly, of being made to feel bad for their ignorance.

Empathy Systems

DevOps is starting to change this world, for the better. As a head fake, we are offered new and shiny tools to help us do things better, tools that fit well into the Agile workflows that dominate our industry. We discuss how our software fits into the broader ecosystem of a reliable deployment, how we can better deliver the goal of “working software”.

As a dev, there’s nothing more satisfying than delivering software, making something work and delighting those who have asked for my help.

But what we’re learning with DevOps isn’t tools, it’s empathy. It’s teaching us what silos look like, and as a dev, teaching me that ops people don’t trust me, they see me as a risk, a threat, a source of anxiety.

DevOps is teaching our culture to see that our isolationist attitudes prevent doing great work.

This must not be the only step, but the first step. We are discovering empathy, and with it how to look at other points of view, other needs and concerns, and to deliver “better software” as a result.

But our definition of better remains flawed, incomplete and hostile, because our culture still isolates us from the users of our software. Better remains better for us, for the elite, where we deliver our features faster but still demand obsequiousness and reject an understanding of what our users need, of what their goals are.

Improvement Plans

It doesn’t have to be this way. DevOps is showing us how to respect other needs, other goals and priorities. It is teaching us a culture of respect and inclusion, but only for those who already share our elitism.

We can go further. Entire fields of study exist to understand user needs, to improve their experiences and deliver tools that aren’t just stable or easy to deploy, but that delight and inspire, that make people feel great and powerful on their own, without the false ideal that they must be able to code.

We must choose to admit that our culture is exclusionary, that it performs exclusion as a system and that we have treated this as okay, and we must choose to explicitly act to include those who have been excluded. The design experts, the writers, the user experience experts. The people whose skills make a project live, the people we look down upon. The people who aren’t like us, who show us different needs that we keep overlooking.

The users, the very people we make these tools to serve.

And just maybe, we can rebuild the trust that has so long been shattered, build communities that respect all needs, and work to serve more people.

Human Driven Development at LCA

Hey!

So, in case you missed it last time, I was honoured to be invited to keynote WOOTConf at Linux.conf.au on Monday, and was able to give an updated version of my talk, “Human Driven Development”.

You should go watch it! It’s great!

Secondary Effects

So, we all know about contempt culture nowadays, the effect whereby we build status on top of displays of contempt, showing that we’re holders of the “right” knowledge.

But this has some unfortunate side effects, ones I’ve been starting to notice.

Contempt culture doesn’t just encourage us to shame, dismiss and behave contemptuously towards outsiders, it also encourages us to shame and dismiss each other within the community.

The Things You Didn’t Know

You may have seen this effect in “You didn’t know that?!”-style responses to people’s ignorance or questions. This style of response is suppressive, and encourages people who don’t know what’s being talked about to stay quiet, to not ask, to withdraw. It’s a shaming act, a demonstration that you don’t belong because you didn’t know.

But when we do this, we bind the status of belonging to the group to how few questions we ask. Asking becomes an opportunity for mockery, for responses of (possibly mock) shock regarding your ignorance.

How could you be here without knowing that, after all?

Because belonging is tied to how few questions we ask, and to positioning ourselves as knowing things even when we don’t, newcomers to our communities don’t see a community of people who seek knowledge and admit their ignorance.

Instead, they see a community of people who profess expertise and, through contempt culture, are positioning their biases and contempts as the result of that expertise.

Because newcomers don’t see people asking questions, and are shamed for asking questions by those questions being seen as a challenge to their right to belong, they are trained to behave in the same way.

Impostor Syndrome

This is one of the causes of impostor syndrome. People caught in this coupling of ignorance to status will never feel, internally, like they belong. They will always be caught between needing to show that they know everything and being afraid of asking, of being caught out as a fraud, of the challenging, harm-filled “You didn’t know that?!”

This is the feeling of being an unwelcome impostor.

This makes our communities quite hostile and difficult to participate in, because we spend our time afraid and uncertain instead of open and participatory. We perform contempt culture, reinforce that we all hold the right knowledge, and questions that imply we don’t know things are shut down as fast as possible.

Solution Strategies

Solving this isn’t an easy process.

As a community, it requires calling out behaviour where people are behaving as though you must have known something.

It requires luminaries in the communities always being vulnerable, continually admitting that they don’t know things, don’t know why. It requires that everyone dismantle contempt culture by asking “why?”. Why did they use that technology? Why did they make those choices? What are the surrounding requirements that informed these decisions? There are always reasons why things are done the way they are, and until we understand those reasons we cannot usefully comment on them.

It requires tools like a Code of Conduct, so that participants can request help when they are excluded in non-public spaces; tools which describe the standards of behaviour and insist upon their adherence.

More than anything else, it requires caring about being explicitly welcoming to everyone, and interrogating your culture and truly, painfully asking why it isn’t.

These questions are hard, but they’re necessary.

There Is No Meritocracy

Tech culture idolises the idea of the meritocracy, the mythical organisational strategy where those of skill and capability rise to the top, and lead “naturally”. Of course, being tech, this means that those of great technical knowledge and coding skill are the most meritorious, deserving of our recognition and adulation.

But, meritocracy is just an unacknowledged bias. When you say “good at coding”, what you mean is that they have your background, value your values, prioritise like you prioritise. They have your ability to share on GitHub, your spare time to contribute, and make decisions that look like your decisions.

Unacknowledged bias looks like “The best devs spend their holidays coding”.

This bias devalues the idea of other perspectives being meritorious. It insists that other views can never be good enough.

How could they? They don’t fit what you know “good enough” looks like, so you don’t have to think, or question, or challenge.

You know, so you can remain ignorant, deny that questions need to be answered, let alone exist.

This bias means that people who can’t act like you can never be meritorious. How could they? They’re not spending their holidays coding like those with merit, they have other responsibilities and commitments.

There are no Questions

Thinking of the meritocracy in this way isn’t the normal way of considering it; when we challenge what its underlying values are, what its implications and ramifications are, we are showing ignorance.

As we know, contempt culture bases itself on displays of contempt and reinforcing pre-existing group knowledge, and according status on adherence to that demonstration.

This has some side effects, like the bitter knife of impostor syndrome. Asking for knowledge or help is the domain of the lesser, those who are not elite. Impostor syndrome is a natural result: those who are able to answer questions so quickly are looked up to, lauded as the luminaries of the communities. This reinforces that because we don’t know, we aren’t looked up to, that we don’t belong here like they do.

They are the wizards, and we are not.

When we feel like we’re incompetent or don’t belong for asking questions, we don’t (can’t, even!) challenge the ideas within the culture. Asking what the side-effects of the meritocracy are just. isn’t. done.

There are no Answers

This attitude acts to suppress introspection and questions.

We are prevented from asking questions by the fear our culture instills of our own ignorance, and by the backlash that arises when our culture is questioned. Questions of the status quo cannot be heard, or even permitted to exist.

This attitude gets reinforced every day by tech culture. Hacker News suppresses social discussions and conversations where perspectives other than the ones we already have can be examined.

This is an ideology of contempt culture, of confidence being status, that we know enough about social issues already, we know enough about the impact of our actions already.

It is an attitude of continually reinforced ignorance that rewards participants for their complicity.

Here is your Reward

We’re rewarded for our complicity with a sense of belonging. As we know, this is how social norms and mores propagate at all, how we teach children what’s acceptable in society, how we tell each other what we should and shouldn’t do.

In tech culture, belonging is coupled to the rejection of introspection, of questioning why we do what we do in the way we do it. When we behave this way, people offer us their support, whether through their silence or through overt support.

We get to belong, to feel the wonderful endorphin rush of being included.

We push it further because if we don’t then we’ll be seen for the fraud we are, as impostor syndrome whispers such believable lies in our ears.

The Glorious Temptation

A lot of why I initially participated in contempt culture was driven by wanting to belong. Like so many, I was bullied in school and didn’t have a supportive home life, resulting in my withdrawal into computers.

I belonged, there. Videogames never questioned whether I got to play too, they just ran, and I got to play. They never made fun of me for my body, or who I was or wasn’t. I was never judged.

Contempt culture was how I belonged to those communities early in my tech career: showing that I knew the right things, that I was the Right Sort of person, not one of those horrible “lusers”.

Because I hadn’t belonged or fit in for so long, finally doing so meant so much to me, and I was so thrilled that I would have done anything to keep feeling it, to keep it being true.

There Are No Consequences

Had you asked me, back then, to consider the consequences of my actions, whether my behaviour made others feel like they weren’t welcome?

I would have felt like you weren’t just questioning my actions, that you were questioning my very ability to belong, because this is how I showed that I belong.

So, I parroted the lines. I said that you should be coding more, immersed in your work. I refused to consider the consequences of how that would exclude people, because I couldn’t focus beyond my own fear of exclusion.

I could not allow myself to accept the consequences.

It was all I knew. It is all we know.

Meritocracy

I say that the meritocracy doesn’t exist because it is a parroting of the culture that surrounds us, of ideas that tell us we are permitted to belong because we fit the right pattern.

We don’t look at the consequences, because looking at them challenges that we deserve to belong at all, challenges our own ability to think of ourselves as good people.

It does not and cannot allow itself to be challenged, because it challenges our own self-worth, our own belonging.

It does not and cannot exist, because we use it to mean “like me”. “Like me”, which means nothing except “is like me”: not capability or intelligence or skill, just that someone does or does not have a history that looks like mine.

No merit involved.

I experienced this, when I started to question my own ideas around the meritocracy and what being “good” meant. I remembered so many things I’d done, things that I cannot ever undo, or even apologise for, that now I am horrified to have done.

Where was the merit in my shouting out into a meetup to “get a real language”? I was pushing that those of skill and competence should ignore this person, this company, this technology.

In tech culture, be it on mailing lists, IRC, on everything else, my behaviour back then is the meritocracy of now. It is shouting into the dark that I know better, and that you do not belong,

you never will,

because you lack merit.

How to Make a Book of Photography

So after my recent(-ish) publication of Fly, A Collection (6 months is recent, right?), I wanted to talk about the process of making a book of photography.

Fly is my third work, following Linear A and Distinctly Coromandel. All three went through different publication routes, but contain a lot of similarities that are worth discussing.

So, how does one publish books of photography? Well, as much as others might say otherwise, it’s pretty straightforward.

My experience is entirely around self-publishing and developing my own publishing workflows, both by myself and with others.

Step 1: Yes, You’re Good Enough

The major blocker to publishing a work is the belief that you’re not allowed to, that you’re not good enough to do so, or that your work just isn’t worth showing off.

Your brain is lying to you.

Publishing a work is, in a lot of ways, a big deal. It’s not just judging your own work, but putting it out there in such a way that others will also judge it and hold it to their own critical eye. You’ll get feedback, and it won’t always be positive.

More than that, you’re overcoming your own sense of taste. Ira Glass said this brilliantly when discussing the creative process. The work you put together is something you will not be happy with. This is a huge barrier to get over, a hump that will give your brain ample opportunity to tell you that because your work doesn’t live up to your expectations, it won’t live up to other peoples’ either.

Again, your brain is lying to you. Yes, your work is going to disappoint you in some ways, but others will not see that disappointment. Instead, they will see the result of your hard work: the finished piece, complete to the best of your skills at the time, and something that maybe they’re not yet brave enough to make.

They’ll see achievement, not disappointment.

Step 2: Making a Collection

The current internet age has had some interesting side effects with regards to photography. On the one hand, photography is democratised to a point where we have amazing cameras in our pockets all the time. This is amazing, powerful, and a magical world where we can document so much, share so much, and build a collective view of our world in real time. I love it.

On the other hand, because we take so many photos, and because so many photos get uploaded so frequently, our streams are often sequences of beautiful images that lack cohesion or continuity, and it becomes harder to think in terms of a collection of your work, your vision, and your taste.

“Here’s some pretty images I took” isn’t a theme. Like your stream, the lack of a contiguous theme will make the final product feel messy, and it will be a lot more difficult to find a point of completion.

Find a theme.

Your theme will be something you want to present through your work. Linear A grew out of my interest in minimalist images, capturing the way lines work within photographic frames. Fly recaptured the magic of flight, to reclaim it from the misery of airports and long haul and rejoice in the majesty of our world seen from above. Distinctly captured the beauty of the Coromandel, and the ways that humans have irrevocably changed the land.

Themes aren’t always immediately obvious in your master collection. They may only be visible after you’ve dug through the images for a while, categorising and sorting and deciding what belongs where.

Once you’ve found your theme, you’ll probably discover that you don’t have enough pictures. Fortunately, this is a great excuse to go take more pictures, and may be the impetus you need to get out the door and start shooting stuff again.

This step is really, really hard.

For Fly, this step took 5 months, going from almost 2000 images to the 28 that made it into the book. This won’t be an overnight process, and you will often need to refer back to Step 1, believing both that you are good enough and that your taste is good enough to do this.

Step 3: Finding a Printer

So you have a collection you’re not too unhappy with! Congratulations, you are further ahead than most get. You’ll feel like you need to keep refining it, trying to make it better, improve what you have.

Stop. It’s done.

Yes, you can keep polishing and keep improving, but there must be a point where you let it go and bask in the achievement of creation. Not releasing means you can’t take what you’ve learned and try again, as the current work remains “unfinished”.

Not only that, the “getting it ready to print” stage will take a lot more effort than you realise.

There’s two major ways of approaching this:

  1. Easy, Online Print on Demand
  2. Dealing with a local printer yourself

Easy, Online Print on Demand

This will be a service like Blurb or Snapfish, or one of a number of others. They offer tools and super-easy integrations to make printed photo books happen.

Lightroom integrates with Blurb, and they also have their own “make a book thing!” app, if you don’t use Lightroom or want more control. This is great, because you can just drag and drop images into the layouts, push a button, and you’ll get a copy through the post a couple of days later.

It’s pretty magical.

The other ones are equally easy to work with, though lack the close Lightroom integration that Blurb offers.

A Local Printer

Working with a local printer is considerably harder.

You’ll need to do a lot of the pre-press work yourself, handling layout with something like InDesign, and produce a file for the printer to work from. It’s not particularly more onerous, but it’s much less drag-and-drop easy than the major online publishers’ software toolchains.

Local printers usually also prefer to work at a larger scale than Blurb or others, requiring you to purchase more than a single book. Blurb is happy selling you individual books, and dealing with any fulfilment themselves.

The advantage is you get a lot more control over the print process, from paper selection to ensuring that your prints happen on a particular printer with particular inks.

For Linear A, I used Blurb to handle the printing, and I’m really happy with the quality of the books. For Distinctly, my co-author arranged the print with a print shop local to him. For Fly, I worked with a print shop directly to manage the print process, and we discussed paper weight, size, and other items.

Step 4: Colour Theory and Tears

Either way, now you have A Book! Your thing! You made it! You actually really made it! YOU MADE A THING. Twitter and Facebook that thing. It’s yours.

And then you’ll notice that all the colours are wrong. What. The printer screwed up your amazing work!

Well,

no.

What you’ve just discovered is that your eyes are great big liars, computer screens are liars, and paper is what is this I don’t even.

Welcome to the miserable land of colour theory.

You’re probably already aware of white balance, if only passively. Some light bulbs look “warm”, right? And some look “cool” or “cold”. This is white balance in action, where the colour “white” isn’t actually ever white; it’s just perceptually white, because your vision system is out to mess with you.

On top of this, what your computer screen is showing you is red isn’t actually red. Or green, or blue. Because our vision system is adaptive, we don’t notice that it’s not real red, it’s just red until we have a comparison, at which point we can see how red it isn’t.

On top of that, the colours a printer can represent are different from the colours a screen can represent. You’ll start to hear terms like “gamut” and “colour space”, describing what you can get onto the paper at all. Different printers and inks will have different capabilities, too!

Intense shades tend to get lost, being clipped back to dimmer, less saturated versions, and saturation as a whole tends to suffer. It’s harder to get deep contrasts.

On top of on top of that, your computer screen and the printer disagree on what colour that red even is.

Finally, remember how I mentioned that sometimes lights look cold and sometimes they look warm? Well, this means that where you edited your photo, what time of day you edited it, and where you look at the print all matter when it comes to how it’s going to look when you hold it in your hand.

It’s possible to correct for all of this, to get what you see on the screen to match what you see on the paper. But, this is the section of 😡🖥😡. This is the point where you have to decide how much this bothers you and how close is close enough.

This is also the point where you’re totally allowed to go “Screw this, black and white it is.”

Correcting for Colour, Part 1

If you’ve decided to go down this road, you’re going to need some things:

  1. A colourimeter
  2. A reference light
  3. A better screen, maybe

The first one is the critical component of managing colour. This is a piece of physical hardware that you stick to your display, and it measures what your screen thinks “red” looks like, compared to what red should actually be. It uses this information to build a profile, which you apply to your screen while editing. This profile ensures that the image you’re looking at is represented as closely to the agreed-upon colour point as possible. This will usually happen at a white point of 6500K (bluish, but not too bluish).

You may also need a better screen. Most computer LCDs use TN pixels, which generally only have 6 bits of colour information and use dithering effects to make it look closer to 8 bits. The side effect of this is that they’re harder to get accurate, and you’ll see banding as you edit that won’t be present in the final print. An ideal editing display will use IPS as the underlying pixel technology, and (at the upper end) may even offer wide gamut or 10-bit colour support.

For looking at the print, a reference light is necessary. This will be a bulb calibrated at the same white temperature as the screen, generally 6500K. I use a 6500K LED bulb in a standard desk lamp, which seems to work well enough.

Correcting for Colour, Part 2

So now you have a calibrated display. You can look at images and get a really good idea of how they’ll look when other people see them on their screens, and edit accordingly. It won’t be perfect, but it’s better than it was.

The next part you need is a profile for the printer you’ll be using. Blurb has their colour profiles listed on their site, as do most of the online publishers. If you’re working with a local print shop, you’ll need to ask them for the printer make and model and look up the ICC files, or ask if they have a more recent ICC file they’d like you to use for soft proofing.

These will come as ICC files, which you’ll use in Lightroom and Photoshop soft proofing systems. This will give you a good, albeit not perfect, idea of what the print is going to look like.

Lightroom and Photoshop have great tools for showing you what’s going to be out of range for the printer, and let you see how the contrast and tones are going to shift as a result of the printing process.

If the print shop you’re talking to doesn’t have ICC files, or doesn’t know what they’re for, find another print shop. If they haven’t recalibrated recently, find another print shop. If they offer to have you come in and look at the photos on their Photoshop machine, find another print shop.

Correcting for Colour, Part 3

Once you’ve re-edited your work, it’s time to find out how it actually looks on paper, so you need to order some prints. For Blurb and friends, this may be ordering a complete book. For a local printer, you’ll be able to ask them to run a couple of images off the same printer (… maybe …) for you to look at.

Bring them home. Look at the prints under the reference light. Compare them to your screen in a room lit only by the reference light.

Decide if this is, in fact, close enough. If it’s not, re-edit based on what you see, order more proofs, and try again.

Correcting for Colour, Optional Part 4

The final thing you could do, depending on how much it bothers you, is to get a print colourimeter as well as a display colourimeter. This device is used to measure the colour on a piece of paper, from a reference light (usually inside the device itself).

For the ultimate in colour control, this device is necessary. I don’t have one; I made the call that I don’t actually care that much about fully accurate colour, and I’m satisfied with “close enough”.

Screw this, Black and White it is

I went with black and white for Fly, mostly because I thought it was a better choice for the images, but also because managing the full colour process can be pretty obnoxious.

Black and white doesn’t spare you the contrast shifts and loss of tonal range, though. You’ll still need to order proofs and edit against what you see.

For instance, I noticed that a lot of my images needed to be brightened considerably to avoid losing shadow detail in print: images that had already been soft proofed, that I thought I’d corrected enough, but which still came out too dark.

And even then, your proofs might be done on a different printer.

Step 4: Order Some Books

You’ve proofed, or not, and now it’s time to get some books!

If you’re using Blurb or other online services that give you a free web store, you don’t need to order more than one for yourself, and maybe some to gift to friends and family.

If you decided to work with the printer yourself, now you’ll have possibly a box of books! It’s a really amazing feeling to open a box and see a pile of things that you made.

They’ll be slightly different from what you thought you’d get, within the tolerance of how much you might care. Printing is hard! But you did it!

Step 5: Maybe Some Sales!

This is the part where the dream is easily crushed, and where you learn whether you’re doing this because you love doing it.

Linear A has sold 17 copies since I released it, and Fly was a limited run of 30 copies, of which 19 sold, one was mine, and one went to the National Library.1

As Distinctly was run as a PledgeMe campaign, it pre-sold 50 copies to backers to give us the means to make it at all. We have yet to sell any since the campaign.

Making a book probably isn’t going to make money. It’s probably not going to launch a career as a photographer, especially in the modern world. With Blurb, your up-front outlay is nothing, so you won’t lose money, but you’ll make very little: depending on your choices, a Blurb book can cost north of USD$50 per copy before you see anything.

With private printing, you’ll be putting a decent amount of money up front just to do the print run at all. Distinctly needed NZD$1800 to print 100 books, handle shipping and fulfilment, and cover the other rewards, with none of our time or effort being paid for.
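To make the economics concrete, here’s a back-of-the-envelope sketch. The NZD$1800 and 100-copy figures are the Distinctly numbers from above; the sale price is a made-up placeholder, not anything we actually charged.

```python
# Per-copy printing cost for a private run, using the Distinctly figures:
print_run_cost_nzd = 1800
copies = 100
cost_per_copy = print_run_cost_nzd / copies  # NZD 18.00 per book

# Copies you must sell just to cover the print run, at a hypothetical
# sale price (placeholder figure):
sale_price_nzd = 45
break_even_copies = -(-print_run_cost_nzd // sale_price_nzd)  # ceiling division

print(cost_per_copy)      # 18.0
print(break_even_copies)  # 40
```

Forty copies just to break even on printing alone, before any of your time, and our campaign pre-sold fifty. The margins are thin, which is rather the point of this section.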

We did it for the love.

Fly cost a considerable amount up front as well, and required multiple back-and-forths with the printer, multiple proof runs, and many tweaks to get to a final book. In the end, I haven’t made money.

I did it for the love.

Linear A has so far made enough money to buy me a couple of pairs of socks. I’m not kidding, that’s all it’s made.

Again, I did it for the love.

Step 6: And, breathe.

At the end, you’ve done it. You’ve made a work, you’ve discovered more of what your taste looks like and means, what images you find meaningful and worth sharing. You have made something, a real physical artefact in the world that only you could make.

It wasn’t easy. It wasn’t simple. It wasn’t flawless, but nothing ever is.

But it is yours, and no one can ever take that from you.

It doesn’t matter that only a few people ever see it, it doesn’t matter that only your friends bought it. What matters is that you did it.

Congratulations. You’re awesome. Welcome to the club.

Now, let’s go do it again. 😄


  1. Turns out that, in New Zealand, any book needs to be submitted to the National Library. So now I have work in the National Library. How cool is that?