The Particular Finest

Presented by aurynn shaw

More Fun with Terraform Templates

So you may have noticed last time that I said I’m trying to create complex JSON objects from within Terraform.

In this case, I really want to be able to create an AWS ECS container definition, which looks a bit like this (copied from the AWS docs)

 {
   "name": "wordpress",
   "links": [
     "mysql"
   ],
   "image": "wordpress",
   "essential": true,
   "portMappings": [
     {
       "containerPort": 80,
       "hostPort": 80
     }
   ],
   "memory": 500,
   "cpu": 10
 }

The important part for me here is making a module to create these JSON blocks. This will let me keep all the variables in Terraform variable files, and ensures that I can interrogate the state file as to what variables are set, and for what container definition.

Ideally, I want the declaration to look something like this

module "container_definition" {
  source = "./ecs_container_defintion"
  
  name = "container"
  image = "hello-world"
  essential = "true"
  memory = 500
  cpu = 10
  port_mappings = [
    {
      "containerPort": 80,
      "hostPort": 80
    }
  ]
}

So, we only have three basic pieces of data to worry about here:

  1. Simple key-value associations
  2. Arrays of strings
  3. Arrays of maps

As we looked at last time, the jsonencode function can’t deal with an array of maps (or any complex datatype), so we have to unpack this manually.

But we also can’t use jsonencode for the basic data pieces either, because building a map and then encoding it would give us a completed JSON string that we couldn’t splice the complex data types into.

So that won’t work.

What will work, however, is the bit we used last time, specifically the join(",\n", var.list) that we used. However, instead of using a variable directly, we can instead create the list on the fly using the list() function from Terraform.
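That is, instead of joining an existing variable, we build the list inline. A toy sketch (the values here are purely illustrative):

val = "${join(",\n", list("first", "second"))}"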

Layer 1

That’s set the scene on what we want and how we’ll get it to work. Let’s dig into what it’d look like.

I’m going to skip the variable declarations this time around, and focus on just the data declarations and the resulting JSON blocks.
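For reference, a minimal sketch of what those skipped declarations might look like (assuming 0.8-era syntax, with the empty-string defaults that the optional fields rely on later):

variable "name" {}
variable "image" {}

variable "essential" {
  default = ""
}

variable "cpu" {
  default = ""
}

variable "memory" {
  default = ""
}

variable "links" {
  type = "list"
  default = []
}

variable "port_mappings" {
  type = "list"
  default = []
}

Note that name and image deliberately get no default, so Terraform will complain if they’re omitted.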

To start, let’s just have a basic module call, like this

module "container_definition" {
  source = "./ecs_container_definition"
  name = "container"
  image = "hello-world"
  essential = true
}

Three things, completely straightforward. Should be easy.

So, going with our dynamic list and join, what will the template look like? Probably something like this

data "template_file" "_final" {
  template = <<JSON
  {
    $${val}
  }
JSON
  vars {
    val = "${join(",\n    ",
        list(
          "${jsonencode("name")}: ${jsonencode(var.name)}",
          "${jsonencode("image")}: ${jsonencode(var.image)}",
          "${jsonencode("essential")}: ${var.essential ? true : false }",
          )
      )}"
  }
}

So here’s where it’s starting to get a bit, well, not great. As you’ve noticed, each key in the list has to be run through jsonencode, to ensure that it’s properly quoted. The values have to be wrapped in quotes as well, so they’re encoded.

Because we don’t want a list of single-key JSON object strings, we can’t just encode as a list of maps.

var.essential is interesting, as well. Passing a boolean value like true above gets converted to 1 by the module process, so here we just cast it back.

This will probably fail miserably if you pass in the "false" string, instead of the false boolean.

Finally, when we render it, we get

{
  "name": "container",
  "image": "hello-world",
  "essential": true
}

Which looks perfect! Exactly what we want.

Next, let’s add the links array. This one should be easy, because it’s just a list of strings, and we can rely entirely on jsonencode.

"${jsonencode("links")}: ${jsonencode(var.links)}"

Easy. And the output is right, too

"links": ["mysql"]

Port of Call

Next, we’ll add in the port mapping section, the complex part, the array of maps, which is the first point where we need to break out a second template_file to handle the rendering. This is specifically so we can use the count construct to iteratively jsonencode the elements of the list, and then wrap the entire contents in [].

This is going to look something like

data "template_file" "_port_mapping" {
  count = "${length(var.port_mappings)}"
  template = "$${val}"
  vars {
    val = "${jsonencode(var.port_mappings[count.index])}"
  }
}

We can be pretty dense here, as all we’re trying to ensure is that each element of our array has been rendered by jsonencode, and that doesn’t require any additional complex handling.

Adding it to our list would be

"${jsonencode("portMappings")}: [
     ${join(",\n", data.template_file._port_mapping.*.rendered)}
]"

which gives us the output we’re looking for:

"portMappings": [
   {"containerPort":"80","hostPort":"80"}
]

Great!

Okay, so, what if we leave things off? image and name aren’t optional, so we just don’t provide a default and let the Terraform compiler handle that case. essential isn’t an essential field, I think we should be able to drop that successfully. Let’s do that.

Hm

Errors:

  * __builtin_StringToBool: strconv.ParseBool: parsing "": invalid syntax in:

Well,

That’s not good. That’ll be the var.essential ? : section, where we try to cast an int into a boolean.

So we’ll need to detect if we’re passing in the default empty string, and do something useful based on that.

But that’s easy! We’ll just use another ternary to test it! Something like this

"${ var.essential != "" ? "${jsonencode("essential")}: ${var.essential ? true : false }" : "something" }",

and then we go again, and

Errors:

  * __builtin_StringToBool: strconv.ParseBool: parsing "": invalid syntax in:

oh

it’s evaluating it… twice…

Hrm.

Okay. We can solve this. How about

...
"${var.essential != "" ? data.template_file.essential.rendered : ""}",
...

data "template_file" "essential" {
  template = "<span class="katex"><span class="katex-inner"><span class="strut" style="height:0.75em;"></span><span class="strut bottom" style="height:1em;vertical-align:-0.25em;"></span><span class="base textstyle uncramped"><span class="mord textstyle uncramped"><span class="mord mathit" style="margin-right:0.05724em;">j</span><span class="mord mathit">s</span><span class="mord mathit">o</span><span class="mord mathit">n</span><span class="mord mathit">e</span><span class="mord mathit">n</span><span class="mord mathit">c</span><span class="mord mathit">o</span><span class="mord mathit">d</span><span class="mord mathit">e</span><span class="mopen">(</span><span class="mord">&quot;</span><span class="mord mathit">e</span><span class="mord mathit">s</span><span class="mord mathit">s</span><span class="mord mathit">e</span><span class="mord mathit">n</span><span class="mord mathit">t</span><span class="mord mathit">i</span><span class="mord mathit">a</span><span class="mord mathit" style="margin-right:0.01968em;">l</span><span class="mord">&quot;</span><span class="mclose">)</span></span><span class="mrel">:</span></span></span></span>{val ? true : false}"
  vars {
    val = "${var.essential != "" ? var.essential : "false"}"
  }
}

Eesh. That’s not great. It works, but, yeah, not very well. The first evaluation always takes place, even if the other branch in the comparison is taken. This means that, no matter what, I have to create the essential template node, even if essential is undefined, to pull off this effect.

You may be asking “why is she even trying to cast things to true or false? JSON says it’ll just work.”

And the answer is because Terraform tries to be clever, and turns the JSON blob into a struct. Which is strictly typed to expect a bool.

Which means it complains loudly at anything that’s not a literal true.

Fortunately, cpu and memory should be easy; we can just test inline whether they’re defined, such as

"${var.cpu != "" ? "${jsonencode("cpu")}: ${var.cpu}" : "" }",

Collapse the List

Rendering out an empty string, "", does have one negative side effect, in that we end up with a JSON block that’s invalid. However, there was a reason I picked the empty string as my return value, and that’s the compact() function in Terraform.

compact() takes an array, and strips out all the items that are empty, so changing cpu above, for example, means the entire

"cpu": 10

line just won’t render if cpu isn’t defined, which is perfect for our “dynamic” goal.
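As a sketch (abridged to two keys, following the join pattern from earlier), the compacted final render would look like:

val = "${join(",\n    ",
    compact(
      list(
        "${jsonencode("name")}: ${jsonencode(var.name)}",
        "${var.cpu != "" ? "${jsonencode("cpu")}: ${var.cpu}" : "" }",
      )
    )
  )}"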

Back to Port

Okay, so, back to port mappings.

Unfortunately, due to the complexity of the operation, we need to do the same thing we did with the essential entry and break it out into its own template_file, like this

data "template_file" "_port_mappings" {
  template = <<JSON
  "portMappings": [
    $${val}
  ]
JSON
  vars {
    val = "${join(",\n", data.template_file._port_mapping.*.rendered)}"
  }
}

and address it in the final render like so

"${length(var.port_mappings) > 0 ?  data.template_file._port_mappings.rendered : ""}"

And, lo and behold, it renders correctly.

This has “bad idea” written all over it

Of course, the proof is in the pudding: Will AWS accept this as a valid task definition?

After removing links, our final rendered block is

  {
    "name": "container",
    "image": "hello-world",
    "cpu": 10,
    "memory": 500,
    "essential": true,
    "portMappings": [
    {"containerPort":"80","hostPort":"80"}
     ]
  }

which looks agreeably correct, to me, but it’s not me that must be agreeable, but Terraform and AWS.

And the answer is

no.

At some point in this, our map had the port values changed from integers into strings, and Terraform doesn’t cast from strings when deserialising the JSON blob.

sigh fine.

So, each portMapping has three elements: a hostPort, a containerPort, and an optional protocol, where protocol can be either “tcp” or “udp”.

Because it’s two different kinds of things, we’re going to need another compacting list render step.

OKAYFINE.

OKAYFINE

So, we need more control over the port mapping render. We already broke it out into its own template_file block, so let’s start there.

Because we’re using count.index to create a Terraform node array, we have to handle each element as we enumerate it. We can’t use the same trick we’re using in the main renderer, where we collapse a list using join, at least not in the same way.

Actually

Maybe we can.

Because port_mappings is an array of maps, we can use lookup() to pull a value out of each map during our iteration, and use the default to return a "" in the case that it’s not present.

Which we can then use as our list elements

Which we can then collapse into just the elements that exist

which we can turn into our rendered dict! Like this!

data "template_file" "_port_mapping" {
  count = "${length(var.port_mappings)}"
  template = <<JSON
$${join(",\n", 
  compact(
    list(
    hostPort == "" ? "" : "$\${ jsonencode("hostPort") }: $${host_port}",
    "$\${jsonencode("containerPort")}: $${container_port}",
    protocol == "" ? "" : "$\${ jsonencode("protocol") }: $\${jsonencode(protocol)}"
    )
  )
)}
JSON
  vars {
    host_port = "${ lookup(var.port_mappings[count.index], "hostPort", "") }"
    # So that TF will throw an error - this is a required field
    container_port = "${ lookup(var.port_mappings[count.index], "containerPort") }"
    protocol = "${ lookup(var.port_mappings[count.index], "protocol", "") }"
  }
}

Isn’t it beautiful.


Unto the Breach

Okay, our final rendered output looks like

  {
    "name": "container",
    "image": "hello-world",
    "cpu": 10,
    "memory": 500,
    "essential": true,
    "portMappings": [
  {
"hostPort": 80,
"containerPort": 80
}

]

  }

We’re getting a decent amount of spurious whitespace at this point, but we’re just going for a proof of concept. We can clean that up later.

Again, the proof is in the pudding. Will Terraform take this snippet?

YES!

Next Steps

At this point, this module is a very barebones implementation, and doesn’t support the majority of options that the ECS task definition supports, but now that we have a reasonably complete implementation for the basics it should be relatively straightforward to fill out the rest of available options.

Just Because You Can

This is probably an excellent example of that old axiom, “just because you can doesn’t mean you should”. At the same time, it provides a considerably cleaner interface for the user working with a container definition, which is an important win.

More than anything, I think it’s amazing to see how you can bend a tool that clearly isn’t designed to generate dynamic JSON into generating dynamic JSON.

Fun with Terraform Template Rendering

One of the things I do as part of Eiara is write a lot of Terraform, an infrastructure definition language, to provide a sensible baseline cloud instantiation of infrastructure and resources.

I’m quite fond of Terraform as a tool, even though it still has a decent number of weirdnesses and edge cases. If you haven’t seen it, you should look at Charity’s blog about Terraform for a great rundown on those issues. It’s quite powerful, and works well with how I think, letting me express exactly what I intend.

Part of what I’m doing requires rendering out JSON templates for use with AWS. This is a pretty normal requirement for doing anything with AWS, from the IAM policies to the ECS task definitions. Lots and lots and lots of JSON.

Lots. (lots)

Specifically, what I’m trying to do right now is make a list of dicts, where each dict is the representation of a single module, which then gets jammed together to be inserted as a list in a larger block of JSON.

Straightforward, right?

Well…

Gotchas as a Service

First off, the best way to see what’s going on is to write the rendered output to disk and look at it. Terraform doesn’t directly let you do that, instead requiring an approximation, with something like:

resource "null_resource" "export_rendered_template" {
  provisioner "local-exec" {
    command = "cat > test_output.json <<EOL\n${data.template_file.test.rendered}\nEOL"
  }
}

Note the \ns in order to make sure the multiline expands properly. Of course, nothing could go wrong with a rogue template here, but I digress.

But, we can write content out to disk, and check that our JSON blobs are working as expected. Great!

That’s Interesting

Next up, template files. We render them with a straightforward block,

data "template_file" "test" {
  count = "${length(var.alist)}"
  template = "${file("./tpl.json")}"
  vars {
    variable = "${var.myvar}"
  }
}

and everything is fine.

Interestingly, files that are rendered by the Terraform template system have access to the full range of functions provided by the Terraform interpolation engine. This means that you can use the file() function from inside a template file.

That’s curious. I’m sure nothing bad could happen there.
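To illustrate, a template file can pull in other files at render time. A contrived sketch, where tpl.json contains something like this (other.json is hypothetical):

{
  "contents": ${jsonencode(file("./other.json"))}
}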

Complex Data Types and Templates

Back to trying to render my JSON. The first thing I tried was just to plug a list in, and try to render it inside the template, much like this:

variable "alist" {
  type = "list"
  default = [1,2]
}

data "template_file" "test" {
  template = "${file("./tpl.json")}"
  vars {
    alist = "${var.list}"
  }
}

Unfortunately, Terraform doesn’t, as of 0.8.7, let you pass complex types into the template renderer, so that doesn’t work.

However, if we use join(",", var.alist), it’ll render much as we expect it to for numbers.

{
  "list":[1,2]
}
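Wired up, that looks something like this sketch, assuming tpl.json contains {"list":[${alist}]}:

data "template_file" "test" {
  template = "${file("./tpl.json")}"
  vars {
    alist = "${join(",", var.alist)}"
  }
}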

What about if we use strings?

variable "alist" {
  type = "list"
  default = ["a","b"]
}

Output:

{
  "list":[a,b]
}

Well, that breaks. But! We have the jsonencode() function, which returns blocks of JSON. Great! We can render our list arbitrarily!
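That is, we encode the whole list before it reaches the template. A sketch:

vars {
  alist = "${jsonencode(var.alist)}"
}

With the template reduced to {"list":${alist}} (the brackets now come from jsonencode), the string case renders as {"list":["a","b"]}.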

List of Strings of Rendered JSON

But the goal here is to drop a list of rendered blobs of JSON into our template. How does that hold up with jsonencode?

variable "alist" {
  type = "list"
  default = [<<EOF
{"foo": "bar"}
EOF
,"b"]
}

Output:

{
  "list":["{\"foo\": \"bar\"}\n","b"]
}

Hm. That’s not good, but, surprisingly, we can use HEREDOCs inside a list declaration.

Neat. But, I digress.

What about a nested map? Will that work?

variable "amap" {
  default = {
    foo = {
      baz = "bar"
      beez = ["a"]
    }
  }
}

Output:

Errors:

  * jsonencode: map values must be strings in:

${jsonencode(var.amap)}

Hrm, not directly.

And we can’t pass complex types into templates.

We could use string literals, but then we’re not able to pass in an arbitrary number of elements through our list. Also, since we’re expecting to return rendered bits of JSON from our modules, this is just going to wrap things in strings, which isn’t what we want anyway.

Okay, What About

So if all I want to do is make a list of dicts, I should be able to render to JSON the dict initially, and then just join the properly rendered JSON blobs with a comma.

Let’s test that.

data "template_file" "test" {
  count = "${length(var.alist)}"
  template = "${file("./tpl.json")}"
  vars {
    alist = "${jsonencode(element(var.alist, count.index))}"
  }
}
resource "null_resource" "export_rendered_template" {
  provisioner "local-exec" {
    command = "cat > test_output.json <<EOL\n${join(",\n", data.template_file.test.*.rendered)}\nEOL"
  }
}

Output:

{
  "list":"a"
},
{
  "list":"b"
}

Okay that’s really close! We’re rendering into templates and then just joining it together with a ,\n and it appears to be what we want.

So in order to get it to look right, we’ll need to wrap it in an additional template_file to add the [] pair that we need to have a proper list of dicts, such as

data "template_file" "test" {
  count = "${length(var.alist)}"
  template = "${file("./tpl.json")}"
  vars {
    alist = "${jsonencode(element(var.alist, count.index))}"
  }
}

data "template_file" "test_wrapper" {
  template = <<JSON
[
  $${list_of_dicts}
]
JSON
  vars {
    list_of_dicts = "${join(",\n", data.template_file.test.*.rendered)}"
  }
}

resource "null_resource" "export_rendered_template" {
  provisioner "local-exec" {
    command = "cat > test_output.json <<EOL\n${data.template_file.test_wrapper.rendered}\nEOL"
  }
}

Output:

[
  {
  "list":"a"
},
{
  "list":"b"
}
]

That’s really close! The indentation is a bit off, but Python can read it!

Complexity

This is obviously a bit of a weird, complex case. By trying to hide the abstractions of JSON blocks that represent, in my case, ECS Container Definitions, I’m requiring other places in my code to craft the correct JSON blobs.

But it also feels like good programming practise to build abstractions on top of things like container definitions and provide a cleaner interface to the components I’m working with. And because I’m building two abstractions, one for the container definition and one for the task definition, I can hide the nesting of templates and, in the end, pass a list of module outputs to a module and have it do The Right Thing.

And that feels right.

Trusting Trust

The tech industry is an interesting place. On the one hand, we have the old guard, the 80s-driven “hacker culture” that drips with contempt culture and ivory tower silo mentalities.

On the other hand, we’re seeing a revolution in our culture in the form of DevOps. DevOps is interesting in that it looks like it’s about tools, but in a complete head fake we see that nothing about DevOps is about our tools; instead it is everything about how we work and how our teams function, both internally and externally.

The mechanisms of contempt culture drive us towards silos, islands of cultural isolation where our tools and ways are “the best”, and anyone outside is lesser. Less smart, less capable, less worthwhile as a human being. These silos have been the dominant force in the tech industry almost since its inception, from the original MIT “lusers” insult through the self-fulfilling prophecy of “PHP is bad”.

DevOps, by contrast, is a cultural push to dismantle the silos between developers and operations. We recognise these silos are harmful and reductive, placing us in combative situations where we are unable to achieve our goals. We’re learning to trust each other, even if we don’t yet fully. We still expect the mechanisms of contempt to rear their head, for people to treat us badly for our ignorance or failures.

But these silos are more than just the silos between ourselves, they are the silos that isolate IT from the rest of the company, and isolate our communities from the broader world around us.

We don’t trust our users, the people that our jobs exist to serve. We still treat them as an isolated silo, as lesser, as knowing less than we do and consequently being unable to contribute.

They don’t code, after all. Their contributions can’t possibly matter.

More importantly, more harmfully, people outside our silos don’t trust us.

The Untrusted Many

We all have stories about the users we hold in contempt. The people who click links in their email (as though we don’t). The people who don’t take care of their systems the way we think they ought.

The people who choose to use devices that just work, and our contempt flows from us that they would dare use tools that enable their goals without our permission and gatekeeping.

I use “gatekeeping” intentionally, for that is what we do. We blame the victims of our failures, accusing the user of clicking the wrong link, not running the right antivirus, of buying the wrong device. We make them ashamed of asking for help, insisting that they should have known better and that they are not worth our time, and we demand obsequiousness before we will lower ourselves to helping them.

And we say we don’t do this.

The broader public doesn’t trust us because we are cruel, isolationist and harmful not just to them but to each other. Through language that demonstrates our contempt for outsiders, such as RTFM, PEBKAC, and ID10T errors, we assert that our skills are the only ones that matter. Our central tools, such as GitHub, spent years treating code as the contribution. We treat the clients that pay for our time with contempt because they don’t know how to ask, or express what their goals are.

We may claim otherwise, but culture is not what we say, it is the actions that we permit.

We permit the Linux kernel community, stewards of a technology that is a vital driving force of modern computing, to be a toxic wasteland. We say we do not value contempt and hostility, but our actions show otherwise.

Outside of our community, we are untrusted. We are the risk, we are the toxic person to many people who know us. We may never have performed contempt culture, never have shamed our family, friends or colleagues, but we benefit from contempt culture. We benefit from the idea that all IT workers are hostile, that the technical are “wizards” and that they should come to us in supplication.

That when they come to us they should beg, and make us feel superior before asking for our help.

I’ve seen this countless times. People become deferential to me when I say I work in technology, because they are so used to people in tech treating them badly, of being made to feel bad for their ignorance.

Empathy Systems

DevOps is starting to change this world, for the better. As a head fake, we are offered new and shiny tools to help us do things better, tools that fit well into the Agile workflows that dominate our industry. We discuss how our software fits into the broader ecosystem of a reliable deployment, how we can better deliver the goal of “working software”.

As a dev, there’s nothing more satisfying than delivering software, making something work and delighting those who have asked for my help.

But what we’re learning with DevOps isn’t tools, it’s empathy. It’s teaching us what silos look like, and as a dev, teaching me that ops people don’t trust me, they see me as a risk, a threat, a source of anxiety.

DevOps is teaching our culture to see that our isolationist attitudes prevent doing great work.

This must not be the only step, but the first step. We are discovering empathy, and with it how to look at other points of view, other needs and concerns, and delivering “better software” as a result.

But our definition of better remains flawed, incomplete and hostile, because our culture still isolates us from the users of our software. Better remains better for us, for the elite, where we deliver our features faster but still demand obsequiousness and reject an understanding of what our users need, of what their goals are.

Improvement Plans

It doesn’t have to be this way. DevOps is showing us how to respect other needs, other goals and priorities. It is teaching us a culture of respect and inclusion, but only for those who already share our elitism.

We can go further. Entire fields of study exist to understand user needs, to improve their experiences and deliver tools that aren’t just stable, or easy to deploy but delight and inspire, to make people feel great and powerful on their own, without the false ideal that they must be able to code.

We must choose to admit that our culture is exclusionary, that the system of it performs exclusion, and that we think this is ok, and we must choose to explicitly act to include those who have been excluded. The design experts, the writers, the user experience experts. The people whose skills make a project live, the people we look down upon. The people who aren’t like us, who show us different needs that we keep overlooking.

The users, the very people we make these tools to serve.

And just maybe, we can rebuild the trust that has so long been shattered, build communities that respect all needs, and work to serve more people.

Human Driven Development at LCA

Hey!

So, in case you missed it last time, I was honoured to be invited to keynote WOOTConf at linux.conf.au on Monday, and was able to give an updated version of my talk, “Human Driven Development”.

You should go watch it! It’s great!

Secondary Effects

So, we all know about contempt culture nowadays, the effect where we build status on top of displays of contempt, showing that we’re holders of the “right” knowledge.

But this has some unfortunate side effects, ones that I’ve been starting to notice.

Contempt culture doesn’t just encourage us to shame, dismiss and behave contemptuously towards outsiders, it also encourages us to shame and dismiss each other within the community.

The Things You Didn’t Know

You may have seen this effect with “You didn’t know that?!” style responses to people’s ignorance or questions. This style of response is suppressive, and encourages people who don’t know what’s being talked about to stay quiet, to not ask, to withdraw. It’s a shaming act, a demonstration that you don’t belong because you didn’t know.

But, when this is done, we’re binding the status of belonging to the group to how few questions we ask. Asking is an opportunity for mockery, for responding in (possibly mock) shock regarding your ignorance.

How could you be here without knowing that, after all?

Because belonging is tied to how few questions we ask, and to positioning ourselves as knowing things even if we don’t, newcomers to our communities don’t see a community of people who seek knowledge and admit their ignorance.

Instead, they see a community of people who profess expertise and, through contempt culture, are positioning their biases and contempts as the result of that expertise.

Because newcomers don’t see people asking questions, and are shamed for asking questions by those questions being seen as a challenge to their right to belong, they are trained to behave in the same way.

Impostor Syndrome

This is one of the causes of impostor syndrome. People caught in this coupling of ignorance and status will never feel, internally, like they belong. They will always be caught in needing to show that they know everything but afraid of asking, of being caught out as a fraud, of the challenging, harm-filled “You didn’t know that?!”

This is the feeling of being an unwelcome impostor.

This makes our communities quite hostile and difficult to participate in, because we spend our time afraid and uncertain instead of open and participatory. We perform contempt culture, reinforce that we all hold the right knowledge, and questions that imply we don’t know things are shut down as fast as possible.

Solution Strategies

Solving this isn’t an easy process.

As a community, it requires calling out behaviour where people are behaving as though you must have known something.

It requires luminaries in the communities always being vulnerable, and continually admitting that they don’t know things, don’t know why. It requires that everyone dismantle contempt culture by asking “why?”. Why did they use that technology? Why did they make those choices? What are the surrounding requirements that informed these decisions? There are always reasons why things are done the way they are, and until we understand them we cannot usefully comment on the why.

It requires tools like a Code of Conduct, such that participants can request help when they are excluded in non-public spaces, tools which describe the standards of behaviour and insist upon their adherence.

More than anything else, it requires caring about being explicitly welcoming to everyone, and interrogating your culture and truly, painfully asking why it isn’t.

These questions are hard, but they’re necessary.

There Is No Meritocracy

Tech culture idolises the idea of the meritocracy, the mythical organisational strategy where those of skill and capability rise to the top, and lead “naturally”. Of course, being tech, this means that those of great technical knowledge and coding skill are the most meritorious, deserving of our recognition and adulation.

But, meritocracy is just an unacknowledged bias. When you say “good at coding”, what you mean is that they have your background, value your values, prioritise like you prioritise. They have your ability to share on GitHub, your spare time to contribute, and make decisions that look like your decisions.

Unacknowledged bias looks like “The best devs spend their holidays coding”.

This bias devalues the idea of other perspectives being meritorious. It insists that other views can never be good enough.

How could they? They don’t fit what you know “good enough” looks like, so you don’t have to think, or question, or challenge.

You know, so you can remain ignorant, deny that questions need to be answered, let alone exist.

This bias means that people who can’t act like you can never be meritorious. How could they? They’re not spending their holidays coding like those with merit, they have other responsibilities and commitments.

There are no Questions

Thinking of the meritocracy in this way isn’t the normal way of considering it, as when we challenge what the underlying values are, what the implications and ramifications are, we are showing ignorance.

As we know, contempt culture bases itself on displays of contempt and reinforcing pre-existing group knowledge, and according status on adherence to that demonstration.

This has some side effects, like the bitter knife of impostor syndrome. Asking for knowledge or help is the domain of the lesser, those who are not elite. It works such that impostor syndrome is a natural result, as those who are able to answer questions so quickly are looked up to, lauded as the luminaries in the communities. This reinforces that because we don’t know, we aren’t looked up to, that we don’t belong here like they do.

They are the wizards, and we are not.

When we feel like we’re incompetent or don’t belong for asking questions, we don’t (can’t, even!) challenge the ideas within the culture. Asking what the side-effects of the meritocracy are just. isn’t. done.

There are no Answers

This attitude acts to suppress introspection and questions.

We are prevented from asking questions through the fear our culture instills of our own ignorance, through the backlash that arises when our culture is questioned. Instead, questions to the status quo cannot be heard, or even permitted to exist.

This attitude gets reinforced every day by tech culture. Hacker News suppresses social discussions and conversations where perspectives other than the ones we already have can be examined.

This is an ideology of contempt culture, of confidence being status, that we know enough about social issues already, we know enough about the impact of our actions already.

It is an attitude of continually reinforced ignorance that rewards participants for their complicity.

Here is your Reward

We’re rewarded for our complicity with a sense of belonging. As we know, this is how social norms and mores propagate at all, how we teach children what’s acceptable in society, how we tell each other what we should and shouldn’t do.

In tech culture, belonging is coupled to the rejection of introspection and of questioning why we do what we do in the way we do it. When we behave this way, people offer us their support, either through their silence or their overt support.

We get to belong, to feel the wonderful endorphin rush of being included.

We push it further because if we don’t then we’ll be seen for the fraud we are, as impostor syndrome whispers such believable lies in our ears.

The Glorious Temptation

A lot of why I initially participated in contempt culture was driven by wanting to belong. Like so many, I was bullied in school and didn’t have a supportive home life, resulting in my withdrawal into computers.

I belonged, there. Videogames never questioned whether I got to play too, they just ran, and I got to play. They never made fun of me for my body, or who I was or wasn’t. I was never judged.

Contempt culture was how I belonged to those communities early in my tech career. Showing that I knew the right things, to show that I was the Right Sort of person, not one of those horrible “lusers”.

That I hadn’t belonged or fit in for so long meant that, now that I finally did, it meant so much to me, and I was so thrilled that I would have done anything to keep feeling it, to keep it being true.

There Are No Consequences

Asking me to have considered the consequences of my actions, if my behaviour made others feel like they weren’t welcome?

I would have felt like you weren’t just questioning my actions, that you were questioning my very ability to belong, because this is how I showed that I belong.

So, I parroted the lines. I said that you should be coding more, immersed in your work. I refused to consider the consequences of how that would exclude people, because I couldn’t focus beyond my own fear of exclusion.

I could not allow myself to accept the consequences.

It was all I knew. It is all we know.

Meritocracy

I say that the meritocracy doesn’t exist, because it is a parroting of a culture that surrounds us with ideas that tell us that we are permitted to belong, because we fit the right pattern.

We don’t look at the consequences, because looking at them challenges that we deserve to belong at all, challenges our own ability to think of ourselves as good people.

It does not and cannot allow itself to be challenged, because it challenges our own self-worth, our own belonging.

It does not and cannot exist, because we use it to mean “like me”. “Like me”, which means nothing except “is like me”; not capability or intelligence or skill, just that someone does or does not have a history that looks like mine does.

No merit involved.

I experienced this, when I started to question my own ideas around the meritocracy and what being “good” meant. I remembered so many things I’d done, things that I cannot ever undo, or even apologise for, that now I am horrified to have done.

Where was the merit in my shouting out into a meetup to “get a real language”? I was pushing that those of skill and competence should ignore this person, this company, this technology.

In tech culture, be it on mailing lists, IRC, on everything else, my behaviour back then is the meritocracy of now. It is shouting into the dark that I know better, and that you do not belong,

you never will,

because you lack merit.

How to Make a Book of Photography

So after my recent(-ish) publication of Fly, A Collection (6 months is recent, right?), I wanted to talk about the process of making a book of photography.

Fly is my third work, following Linear A and Distinctly Coromandel. All three went through different publication routes, but contain a lot of similarities that are worth discussing.

So, how does one publish books of photography? Well, as much as others might say otherwise, it’s pretty straightforward.

My experience is entirely around self-publishing and developing my own publishing workflows, both by myself and with others.

Step 1: Yes, You’re Good Enough

The major blocker to publishing a work is the belief that you’re not allowed to, that you’re not good enough to do so, or that your work just isn’t worth showing off.

Your brain is lying to you.

Publishing a work is, in a lot of ways, a big deal. It’s not just judging your own work, but putting it out there in such a way that others will also judge it and hold it to their own critical eye. You’ll get feedback, and it won’t always be positive.

More than that, you’re overcoming your own sense of taste. Ira Glass said this brilliantly when discussing the creative process. The work you put together is something you will not be happy with. This is a huge barrier to get over, a hump that will give your brain ample opportunity to tell you that because your work doesn’t live up to your expectations, it won’t live up to other peoples’ either.

Again, your brain is lying to you. Yes, your work is going to disappoint you in some ways, but others will not see that disappointment. Instead, they will see the result of your hard work: the finished piece, complete to the best of your skills at the time, and something that maybe they’re not yet brave enough to make.

They’ll see achievement, not disappointment.

Step 2: Making a Collection

The current internet age has had some interesting side effects with regards to photography. On the one hand, photography is democratised to a point where we have amazing cameras in our pockets all the time. This is amazing, powerful, and a magical world where we can document so much, share so much, and build a collective view of our world in real time. I love it.

On the other hand, because we take so many photos and so many photos get uploaded so frequently our streams are often sequences of beautiful images that lack cohesion or continuity, and it becomes harder to think in terms of a collection of your work, your vision, and your taste.

“Here’s some pretty images I took” isn’t a theme. Like your stream, the lack of a contiguous theme will make the final product feel messy, and it will be a lot more difficult to find a point of completion.

Find a theme.

Your theme will be something you want to present through your work. Linear A grew out of my interest in minimalist images, capturing the way lines work within photographic frames. Fly recaptured the magic of flight, to reclaim it from the misery of airports and long haul and rejoice in the majesty of our world seen from above. Distinctly captured the beauty of the Coromandel, and the ways that humans have irrevocably changed the land.

Themes aren’t always immediately obvious in your master collection. They may only be visible after you’ve dug through the images for a while, categorising and sorting and deciding what belongs where.

Once you’ve found your theme, you’ll probably discover that you don’t have enough pictures. Fortunately, this is a great excuse to go take more pictures, and may be the impetus you need to get out the door and start shooting stuff again.

This step is really, really hard.

For Fly, this step took 5 months, going from almost 2000 images to the 28 that made it into the book. This won’t be an overnight process, and you will often need to refer back to Step 1, believing both that you are good enough and that your taste is good enough to do this.

Step 3: Finding a Printer

So you have a collection you’re not too unhappy with! Congratulations, you are further ahead than most get. You’ll feel like you need to keep refining it, trying to make it better, improve what you have.

Stop. It’s done.

Yes, you can keep polishing and keep improving, but there must be a point where you let it go and bask in the achievement of creation. Not releasing means you can’t take what you’ve learned and try again, as the current work remains “unfinished”.

Not only that, the “getting it ready to print” stage will take a lot more effort than you realise.

There are two major ways of approaching this:

  1. Easy, Online Print on Demand
  2. Dealing with a local printer yourself

Easy, Online Print on Demand

This will be a service like Blurb or Snapfish or a number of others. They offer tools and super easy integrations to make it really easy to make printed photos happen.

Lightroom integrates with Blurb, and they also have their own “make a book thing!” app, if you don’t use Lightroom, or want more control. This is great, because you can just drag and drop images into the layouts, push a button, and you’ll get a copy through the post a couple of days later.

It’s pretty magical.

The other ones are equally easy to work with, though lack the close Lightroom integration that Blurb offers.

A Local Printer

Working with a local printer is considerably harder.

You’ll need to do a lot of the pre-press work yourself, handling layout with something like InDesign, and produce a file for the printer to work from. It’s not particularly more onerous, but it’s much less drag-and-drop easy than working with the major publishers’ software toolchains.

Local printers usually also prefer to work at a larger scale than Blurb or others, requiring you to purchase more than a single book. Blurb is happy selling you individual books, and dealing with any fulfilment themselves.

The advantage is you get a lot more control over the print process, from paper selection to ensuring that your prints happen on a particular printer with particular inks.

For Linear A, I used Blurb to handle the printing, and I’m really happy with the quality of the books. For Distinctly, my co-author arranged the print with a print shop local to him. For Fly, I worked with a print shop directly to manage the print process, and we discussed paper weight, size, and other items.

Step 4: Colour Theory and Tears

Either way, now you have A Book! Your thing! You made it! You actually really made it! YOU MADE A THING. Twitter and Facebook that thing. It’s yours.

And then you’ll notice that all the colours are wrong. What. The printer screwed up your amazing work!

Well,

no.

What you’ve just discovered is that your eyes are great big liars, computer screens are liars, and paper is what is this I don’t even.

Welcome to the miserable land of colour theory.

You’re probably already aware of white balance, if only passively. Some light bulbs look “warm”, right? And some look “cool” or “cold”. This is white balance in action, where the colour “white” isn’t actually ever white, it’s just perceptually white because your vision system is out to mess with you.

On top of this, what your computer screen is showing you is red isn’t actually red. Or green, or blue. Because our vision system is adaptive, we don’t notice that it’s not real red, it’s just red until we have a comparison, at which point we can see how red it isn’t.

On top of that, what colours a printer can represent are different from what colours a screen can represent. You’ll start to hear terms like “gamut” and “colour space”, describing what you can get onto the paper at all. Different printers and inks will have different capabilities, too!

Intense shades tend to get lost, being clipped back to dimmer, less saturated versions, and saturation as a whole tends to suffer. It’s harder to get deep contrasts.

On top of on top of that, your computer screen and the printer disagree on what colour that red even is.

Finally, remember how I mentioned that sometimes lights look cold and sometimes they look warm? Well, this means that where you edited your photo, what time of day you edited it, and where you look at the print all matter when it comes to how it’s going to look when you hold it in your hand.

It’s possible to correct for all of this, to get what you see on the screen to match what you see on the paper. But, this is the section of 😡🖥😡. This is the point where you have to decide how much this bothers you and how close is close enough.

This is also the point where you’re totally allowed to go “Screw this, black and white it is.”

Correcting for Colour, Part 1

If you’ve decided to go down this road, you’re going to need some things:

  1. A colourimeter
  2. A reference light
  3. A better screen, maybe

The first one is the critical component of managing colour. This is a piece of physical hardware that you stick to your display, and it measures what your screen thinks “red” looks like, compared to what it thinks red should be. It uses this information to build a profile, which you apply to your screen while editing. This profile ensures that the image you’re looking at is represented as closely to the agreed-upon colour point as possible. This will usually happen at a white point of 6500K (bluish, but not too bluish).

You may also need a better screen. Most computer LCDs use TN pixels, which generally only have 6 bits of colour information and use dithering effects to make it look closer to 8 bits. The side effect of this is that they’re harder to get accurate, and you’ll see banding as you edit that won’t be present in the final print. An ideal editing display will use IPS as the underlying pixel technology, and (at the upper end) may even offer wide gamut or 10-bit colour support.

For looking at the print, a reference light is necessary. This will be a bulb calibrated at the same white temperature as the screen, generally 6500K. I use a 6500K LED bulb in a standard desk lamp, which seems to work well enough.

Correcting for Colour, Part 2

So now you have a calibrated display. You can look at images and get a really good idea of how they’ll look when other people see them on their screens, and edit accordingly. It won’t be perfect, but it’s better than it was.

The next part you need is a profile for the printer you’ll be using. Blurb has their colour profiles listed on their site, as do most of the online publishers. If you’re working with a local print shop, you’ll need to ask them for the printer make and model and look up the ICC files, or ask if they have a more recent ICC file they’d like you to use for soft proofing.

These will come as ICC files, which you’ll use in Lightroom and Photoshop soft proofing systems. This will give you a good, albeit not perfect, idea of what the print is going to look like.

Lightroom and Photoshop have great tools for showing you what’s going to be out of range for the printer, and let you see how the contrast and tones are going to shift as a result of the printing process.

If the print shop you’re talking to doesn’t have ICC files, or doesn’t know what they’re for, find another print shop. If they haven’t recalibrated recently, find another print shop. If they offer to have you come in and look at the photos on their Photoshop machine, find another print shop.

Correcting for Colour, Part 3

Once you’ve re-edited your work, it’s time to find out how it actually looks on paper, so you need to order some prints. For Blurb and friends, this may be ordering a complete book. For a local printer, you’ll be able to ask them to run a couple of images off the same printer (… maybe …) for you to look at.

Bring them home. Look at the prints under the reference light. Compare them to your screen in a room lit only by the reference light.

Decide if this is, in fact, close enough. If it’s not, re-edit based on what you see, order more proofs, and try again.

Correcting for Colour, Optional Part 4

The final thing you could do, depending on how much it bothers you, is to get a print colourimeter as well as a display colourimeter. This device is used to measure the colour on a piece of paper, from a reference light (usually inside the device itself).

For the ultimate in colour control, this device is necessary. I don’t have one, and I made the call that I don’t actually care that much about fully accurate colour, and I’m satisfied with “close enough”.

Screw this, Black and White it is

I went with black and white for Fly, mostly because I thought it was a better choice for the images, but also because managing the full colour process can be pretty obnoxious.

Black and white doesn’t save you from the contrast shifts and loss of tonal range, though. You’ll still need to order proofs and edit against what you see.

For instance, I noticed during print that a lot of my images needed to be brightened considerably in order to not lose detail in the shadows; images that had gone through soft proofing, and that I thought I’d corrected enough, were still too dark.

And even then, your proofs might be done on a different printer.

Step 5: Order Some Books

You’ve proofed, or not, and now it’s time to get some books!

If you’re using Blurb or other online services that give you a free web store, you don’t need to order more than one for yourself, and maybe some to gift to friends and family.

If you decided to work with the printer yourself, now you’ll have possibly a box of books! It’s a really amazing feeling to open a box and see a pile of things that you made.

They’ll be slightly different, within the tolerance of how much you might care, from what you thought you’d get. Printing is hard! But you did it!

Step 6: Maybe Some Sales!

This is the part where the dream is easily crushed, and where you learn whether or not you want to do this because you love doing this.

Linear A has sold 17 copies since I released it, and Fly was a limited run of 30 copies, of which 19 sold, one was mine, and one went to the National Library.1

As Distinctly was run as a PledgeMe campaign, it pre-sold 50 copies for backers, giving us the means to make it at all. We have yet to sell any from after the campaign.

Making a book probably isn’t going to make money. It’s probably not going to launch a career as a photographer, especially in the modern world. With Blurb, your up-front outlay is nothing, so you won’t lose money, but will make very little. Depending on your choices, a Blurb book can cost north of USD$50 per book, before you see anything.

With private printing, you’ll be putting a decent amount of money up to do the print run at all. Distinctly needed NZD$1800 just to do the printing for 100 books, handle shipping fulfilment, and other rewards, with none of our time or effort being covered.

We did it for the love.

Fly cost a considerable amount up-front as well, and required multiple back-and-forths with the printer, multiple proof runs, and many tweaks to get a final book. In the end, I haven’t made money.

I did it for the love.

Linear A has so far made enough money to buy me a couple of pairs of socks. I’m not kidding, that’s all it’s made.

Again, I did it for the love.

Step 7: And, breathe.

At the end, you’ve done it. You’ve made a work, you’ve discovered more of what your taste looks like and means, what images you find meaningful and worth sharing. You have made something, a real physical artefact in the world that only you could make.

It wasn’t easy. It wasn’t simple. It wasn’t flawless, but nothing ever is.

But it is yours, and no one can ever take that from you.

It doesn’t matter that only a few people ever see it, it doesn’t matter that only your friends bought it. What matters is that you did it.

Congratulations. You’re awesome. Welcome to the club.

Now, let’s go do it again. 😄


  1. Turns out that, in New Zealand, any book needs to be submitted to the National Library. So now I have work in the National Library. How cool is that?

That PHP Graph

So by now you’ve probably seen this graph bouncing around the tech conversation in the last couple of days. It’s interesting data science! It’s a great way to see the sorts of trends around how people program and how people are learning to program.

You may have also encountered the idea of contempt culture that I’ve spoken about earlier, where tech communities use demonstrations of contempt towards tools outside what’s “acceptable” in their group as a proxy for belonging to that group.

One of the biggest ways that’s manifested in my career has been a vicious contempt of PHP.

You should be able to see where I’m going with this. Contempt culture tells us to hate PHP, “everyone knows” that PHP is bad and that PHP programmers are bad, and now we have some data science that backs it all up!

I haven’t seen it directly, yet, but this sort of data science is exactly what participants in a contempt culture thrive on. It’s data. It demonstrates that people who write PHP really are worse, or less intelligent, and most definitely don’t belong, and that we have every right to be contemptuous and cruel towards them.

The data supports it, after all.

There’s a wonderful saying that covers this beautifully:

Explanations exist; they have existed for all time; there is always a well-known solution to every human problem — neat, plausible, and wrong.

  • H. L. Mencken

Well, Actually

So there’s a couple of things about people in tech that are relevant here, namely that we are blazingly incompetent at cause analysis and understanding the consequences of our actions.

Let me explain.

What are you even doing

The first one is the most bizarre to me. As a programmer, my entire job is doing cause analysis through debugging and finding out why things are failing, and asking very specific “why”s as a service.

But using that same set of skills and abilities to examine the cultures around us is apparently so horrifying to even consider that it is rejected out of hand, even where we have an oral record of misery and despair, like dysfunctional employment environments.

We know these tools are powerful, because we use them every day. We know we can do amazing things, because we do amazing things every day, but we refuse to just use the tools.

Action begets Reaction

The second one is the strongly held belief in tech that we don’t need to examine or consider the consequences of our actions. This is visible with an example that came out today, where Hacker News openly admits to censoring anything related to diversity.

In a culture where it’s already normal to not care what our technical choices will do to others, this provides reinforcement that we will never have to.

ASK WHY ALREADY

So the major question I have with that data is why. Not “why are they asking on Stack Overflow”, or “why are they using PHP”, but “why did they learn to code this way”.

That is the giant neon sign question that comes out of this data, the fiery inferno of something is very wrong here.

So, why?

Well, let’s add some framings. One, tech culture is highly contemptuous of PHP, a state that traces back to the erosion of Perl’s CGI/web dominance and relevance, and the attendant contempt culture that erosion reinforced. This has the effect that if someone is trying to learn PHP, either to make their own website or to learn what they need to work with Wordpress, they are made to feel awful by anyone they discuss it with.

So, they’ll tend to puzzle it out on their own, working with some tutorial material they find online.

“Ah-hah!” I imagine you saying, preparing to stop and tell me that the tutorials are awful and that these new programmers should know better.

And I have one response.

 How?

These people are making a rational choice to learn to work with, to take one example, Wordpress, which is one of the biggest projects around. There’s a huge market for making themes and providing plugins for Wordpress; why would they not want to be a part of that?

But we’re not considering the consequences of our actions. We act like contemptuous jerks, they wisely disengage, and then we use that disengagement and attendant insecure practises to reinforce our own contempt.

We don’t consider why people do what they do, and take into account all the inputs, and I say that because this has been happening to PHP users since the 90s.

We, technologists, programmers, all of us, through the adherence and perpetuation of contempt culture, drove early PHP programmers out by making them feel bad. So they built their own communities, and wrote tutorials, and learned on their own. Those cultural artefacts are still around, and we can see their effect in the data in front of us.

People don’t want to learn from us because they don’t want to be around us, and we mock them when they ask us for help. To this day.

Better Consequences

This data is a wake-up call. It’s a canary that tells us that our culture is poisonous, that we are not teaching people how to act securely, that we are pushing people outside our ability to help.

We are not making a better world. We are refusing to look at the consequences of our actions.

When we try to say we’re nice now, that we’re approachable and won’t bite, it is hollow and meaningless, because we’ve spent a lifetime being proactive with our contempt and hostility.

Which means we have to be proactive to fix it. We have to care and reach out, we have to do the work, because ultimately we’re responsible for the situation we’re in.

Call to Action

Our culture is to blame, our culture of me, of you, of everyone who’s ever bashed PHP or its users.

But pointing fingers doesn’t help, we just get into another cycle of demonstrative contempt where I can assert that I am better than you because I didn’t do it as much.

It also doesn’t make the code secure, or help those who made these mistakes.

So how do we become proactive? How do we actually help?

Programming, tech as a whole, is a service entity. We exist to support and enable. You have knowledge on how this is harmful, and they don’t. You can help fix it, but not by being an ass about it.

So there’s a three step process to doing something constructive.

  1. Find all the people around you who work with PHP, who have had to endure contempt culture, and apologise for perpetuating it. Really mean it.
  2. Humbly offer to help.
  3. Humbly actually help.

You’re not here to show your superior knowledge or to shame people for not knowing what you know. You’re here to help others learn and grow, to show them that they’re not bad for not knowing, but that it can be harmful.

That there can be consequences.

So do the work. Reach out. Help your friends, acquaintances, neighbours. We can make the world better.

We can be better than what we are.

We just have to try.

Watching me Talk

Hey cool people!

Back in late July, I was honoured to get to speak at WDCNZ here in Wellington, and my talk was on the culture of technology and how contempt culture damages our communities and creates hostile and unpleasant environments.

I think it’s a great talk, and I’m honoured to be able to share the video with you today, right here, right now.

A Collected Set of Data

Need

As a business aurynn, I keep finding out about conferences far too late to either submit to them or even attend.

Case in point: NDC in Sydney next week (Aug 1-5) that I found out about on Jul 28.

This. Keeps. Happening.

and it is frustrating.

Overarching Problem

I have not yet found a single point where I can go to discover AU/NZ tech conferences that are upcoming, or have open CFPs, or have opened ticket sales. I have yet to find anything that regularly issues reminders of conferences I may wish to be interested in.

There are a couple of data sources (Lanyrd, for instance) that cover some of the necessary data, but as they are diverse data sources I don’t often think to go digging.

Even when something crosses my radar, it doesn’t stick the first time, and I will often forget about it.

Solution

A potential solution for this is a curated, human-run website which offers a data feed describing upcoming conferences, conferences with open CFPs, and conferences that have been announced.

There would be no automation around conferences being published to the blog or the mailing list.

The mailing list would be managed via a tool similar to MailChimp.

A human would vet each conference for suitability by the (at least) following criteria:

  1. The conference MUST have a Code of Conduct.
  2. The conference SHOULD have a full resolution process as part of the Code of Conduct.
  3. The conference MUST be applicable to the audience (technical persons).
  4. The conference MUST have a commitment to outreach to diverse speakers and attendees. Predominantly white-men lineups are not acceptable.

Comms strategy

Weekly

A post including:

This post would happen at most once a week.

Monthly

A post including

This post would happen at most once a month.

This places the mailing list or RSS feed at no more than 5-6 posts per month.

Post style

In each section above (New, CFP, ticket sales), a post should only have

to ensure that conference details are all fully up-to-date.

Monetisation

Assuming this hits a point where it needs to pay for itself (i.e. it turns into a job and not just an aurynn-is-annoyed-at-the-things), monetisation should happen via a mechanism like The Deck:

We offer up to 4 sponsorships on the mailing list. Each one gets its own weekly post per month dedicated to their conference. This may be included in the normal weekly posting, or be a dedicated post for that conference (TBD). In the event of an inclusion in the normal weekly, a paragraph dedicated to that conference would be made available.

All paid sponsorships would be included in the monthly post.

Monetisation WOULD NOT be pushed on any conference. It would be a mechanism to provide additional support, not as a means to badger conferences for money in exchange for greater publicity.

So!

Does this sort of data-source/site sound like it’ll be useful? It seems to cover my needs and cover the sorts of conferences I want to attend, as well as excluding conferences which reinforce contempt culture (anti-CoC, for instance).

It also provides a mechanism for regular reminders of conferences, to ensure that they don’t fall off my radar.

So, give me your thoughts and opinions! With, of course, the following caveats: