
My Top Five Talks from Conj 2015

In mid-November, I had the opportunity to attend Clojure/conj by Cognitect. A staple event, this conference features three days of workshops and learning opportunities dedicated to the Clojure language. I really enjoyed seeing Philadelphia for the first time and catching up with fellow Clojurists I hadn't seen since last year's event. Better still, a number of this year's talks were really visionary and forward-thinking (that ol' Conj charm). I came away inspired and wanted to share my five favorite sessions with you. The link on each title takes you directly to a recording of the presentation so you can listen to the entire talk. I hope you find them as inspiring as I did.

Onyx: Distributed Computing for Clojure, by Michael Drogalis. Over the last year or so, Michael has been building Onyx, a fault-tolerant distributed computing platform written in Clojure. Onyx's data-first design is really refreshing and, as often happens, coincides with a great Clojure library.

Debugging with the Scientific Method, by Stuart Halloway. As always, Stu shared some really insightful ideas in his keynote address, this time outlining his approach to debugging. Listen to his talk; it'll give you some food for thought.

Om Next, by David Nolen. Nearly every ClojureScript React library out there tweaks the formula a little, often adding more to the equation than React itself. Om's next major version, om.next, looks to be shaking things up even further in the new year. I'm especially interested to see how it will simplify the client-server bridge.

Serverless Microservices, by Ben Vandgrift & Adam Hunter. I've had the pleasure of working with Ben in the past and I'm...

What is Simulation Testing?

Over the last few months, quite a few people have asked me to explain in more detail what Simulation Testing actually is, and why you would want to use it. In this post, I'd like to give you that background.

In the spectrum of testing, there are two primary axes along which you can categorize approaches: scope and level. In scope, the spectrum stretches from example-based tests, like assert(result == true), to property-based tests (e.g. "for all x > 0, assert pos?(x) returns true"). Property-based tests are more expensive, but provide much stronger guarantees about the behavior under test. In level, the spectrum ranges from white-box tests, which have intimate access to variables and data, to black-box tests, which have knowledge only of externally-visible state. Neither type of test is strictly better than the other, but black-box tests are best suited to validating consumer-facing behavior, the behavior on which your customers are most likely to judge you. Black-box tests are also typically more expensive to build and maintain.

Plotting both axes on a matrix, we can see that Simulation Testing lies at the intersection of property-based and black-box testing. At its heart, Simulation Testing provides strong guarantees about externally-visible client behavior.

                 Example-based            Property-based
    White-box    Traditional unit tests   Generative tests
    Black-box    Integration tests        Simulation Testing

Why Simulation Testing? Now, I mentioned both axes increase in cost as they edge toward Simulation Testing. What kinds of things would necessitate that kind of expenditure? Generally speaking, systems that deserve simulation testing are typically large-scale production systems, generating hundreds of thousands, if not millions of dollars...
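To make the scope axis concrete, here is a minimal Clojure sketch of the two ends of the spectrum, using clojure.test for the example-based case and test.check for the property-based case. The namespace name is my own, and the property simply mirrors the "for all x > 0, pos?(x)" example above:

```clojure
(ns testing-spectrum-example
  (:require [clojure.test :refer [deftest is]]
            [clojure.test.check :as tc]
            [clojure.test.check.generators :as gen]
            [clojure.test.check.properties :as prop]))

;; Example-based: asserts a single, concrete input/output pair.
(deftest pos?-example
  (is (true? (pos? 1))))

;; Property-based: asserts pos? holds for *every* generated x > 0.
(def pos?-property
  (prop/for-all [x (gen/fmap inc gen/nat)] ; naturals shifted so x >= 1
    (pos? x)))

(tc/quick-check 100 pos?-property)
;; => {:result true, :num-tests 100, ...}
```

The example-based test pins down one case; the property-based test states a rule and lets the generator hunt for counterexamples, which is where the stronger guarantee (and the extra cost) comes from.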

Before you begin: Three crucial models for successfully designing sim tests

One of the hardest parts of Simulation Testing is knowing how to start. It's blank-page syndrome all over again. For one, the tooling is new and a little strange. Harder still is synthesizing a plan of action for translating knowledge of your system into a coherent simulation test. I struggle with this myself; when faced with designing a test apparatus to exercise a system completely foreign to me, I have a lot of questions:

- What are the components of the system?
- How do they interact?
- How do customers/actors interact with the system, and how do they receive feedback about those actions?
- Which components do those actions flow through?

... and the list goes on. How can you get answers to these questions (and more)? Now, I'm no architecture astronaut, but I do appreciate good, focused models and diagrams, especially when they answer a question for me. To that end, when I work with clients new to Simulation Testing, there are three crucial model types I always like to receive or prepare before diving into the bulk of the design. Each of these models answers questions about the system under test, and gets me and the other designers and implementers on the same page, with a clear, high-level perspective.

Note: throughout this article, I'll be referencing my book, Application Architecture for Developers, quite heavily (you'll see a page number everywhere I do so). To facilitate your own Simulation Testing design and implementation, you can download the relevant excerpts from my book here. As well, I'd like to offer you 25% off my book until July 20th. You can pick up the...

How to convince your boss to let you try Simulation Testing

While Simulation Testing has certainly grown in popularity in the last few years, it's by no means a mainstream technology yet. With that comes a slew of concerned, well-meaning developers who'd love to apply it at their workplace, but lack the permission to go off on a wild foray into uncharted territory. In this article, we'll explore a few techniques for broaching the topic of Simulation Testing with your boss or team lead.

The Angle

The first thing you've got to get a handle on is your angle. Take a look at your company's business. Where is the risk? In a lot of companies, this will be a large source of revenue that sticks out like a sore thumb. If it's not revenue, think of the hot spots in your system that make the old guard wary. It's not a ghost haunting your application, at least not in the literal sense; these are the places past experience has taught developers and managers to tread carefully. More than likely, they could share stories with you about that one time someone broke payments, crashed the system, or what have you. What this is all about is making Simulation Testing a valuable proposition. We need to be realistic with ourselves: the notion of messing around with some new testing technique to no tangible end is not attractive. Instead, you should find those hot spots, the places where what goes on really matters, and target them for Simulation Testing.

Will it Blend?

At this point, you need to sit down and work out what you can actually hope to achieve. Do you have the...

5 Tips for Better Simulation Test Actions

Next to dialing in an ideal model for your simulation tests, one of the more challenging things to get right in a simulation test is your simulation's actions, the bits that do the actual work. Cataloged below are five best practices you should follow when implementing your own actions.

1. Return Fast

At first blush, it may not seem to matter much how long your Simulant actions take to run. By virtue of Simulant's implementation, however, it is critically important that your actions complete in a timely fashion: every Simulant process completes actions serially, so follow-on actions can pile up behind a slow one. At best, this may result in longer run times for your simulation (which may even be unavoidable if you're testing a relatively slow service). At worst, slow actions can break time-dependent flows, causing agents running under the same process to block each other, or making analysis more difficult by introducing substantial drift.

Suggestion: avoid slow actions where possible, and understand the implications where always-fast actions aren't possible.

2. Use Timeouts

In addition to worrying about how fast your actions complete, you also need to watch out for actions that never return. Where a slow action might delay agents for a time, an action that fails to return will hang an entire Simulant process and, by proxy, the entire simulation. The most common place you'll run into this is web service calls. Despite industry best practices shifting toward "always use a timeout," many Clojure HTTP libraries don't set a timeout by default. I won't enumerate each and every library and its configuration options,...
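To illustrate the timeout advice, here is a hedged sketch using clj-http, one popular Clojure HTTP client. The function name and URL are hypothetical stand-ins for an action body; the :socket-timeout and :connection-timeout options (in milliseconds) are clj-http's own:

```clojure
(ns action-timeouts-example
  (:require [clj-http.client :as http]))

;; Hypothetical action body, not Simulant's actual API. The point is
;; the explicit timeouts: without them, a hung endpoint can stall the
;; whole Simulant process this agent runs on.
(defn fetch-profile
  "Calls the service under test, bounding both connection setup and
  socket reads so the call can never block indefinitely."
  [base-url user-id]
  (http/get (str base-url "/users/" user-id)
            {:connection-timeout 5000     ; ms to establish the connection
             :socket-timeout     5000     ; ms to wait for data on the socket
             :throw-exceptions   false})) ; surface HTTP errors as data
```

Whatever client you use, the shape is the same: pass an explicit timeout on every call an action makes, and decide deliberately how a timed-out call should be recorded in your simulation's results.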

Interview: Paul de Grandis on Simulation Testing

In this post, I sit down with Paul de Grandis of Cognitect to discuss some of the finer points of his experience with Simulation Testing: where he got his start with it, where he's seen it successfully applied, and when you should consider it for your own systems. Enjoy! Want to learn more? We regularly publish content on simulation testing, ranging from business cases and implementation tips to helpful libraries, tools, and services. Sign up for our mailing list to be the first to hear about this content....

Conveying State with the Process State Service

One of the first snags simulation test implementers run into is conveying state across an agent's lifetime. Maybe it's an authentication token, maybe it's the ID of an object to interact with further; whatever it is, implementers are often stumped by Simulant's apparent inability to add smarts like this to running agents. To some degree, this is by design: an agent's actions should generally be pre-planned and isolated. That said, Simulant does in fact provide a facility for conveying this kind of information: its process state service. In this article, we'll dive into the dos and don'ts of process state, as well as one of our own libraries for conveying ephemeral information, sim-ephemeral.

What is the Process State Service?

First, let's define the problem. At its root, we need to capture state during the processing of agent actions. Many options were considered in the design of Simulant, but the most flexible and least impactful option ended up being a database per process (one or many agents reside on a single process). This is the Process State Service. If you're at all familiar with Datomic's capabilities, you may know it offers transient data stores by way of in-memory databases. This is what the process state service provides: a flexible in-memory database per process, with the express purpose of storing agent state.

It's all well and good to have this facility, but how do you use it? With Simulant alone, this is largely a manual endeavour. You've got a database; do what you will with it. In practice, however, you'll often want a few functions to ease...
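To ground the idea, here is a minimal sketch of the underlying mechanism: an in-memory Datomic database storing an agent's auth token. The schema and helper names are my own illustration, not Simulant's actual process state API; only the Datomic calls themselves are real:

```clojure
(ns process-state-sketch
  (:require [datomic.api :as d]))

;; A per-process, in-memory database: the same kind of transient store
;; the process state service hands each Simulant process.
(def uri "datomic:mem://process-state")
(d/create-database uri)
(def conn (d/connect uri))

;; Illustrative schema: one token per agent, keyed by an agent id.
@(d/transact conn
   [{:db/ident       :agent/id
     :db/valueType   :db.type/string
     :db/cardinality :db.cardinality/one
     :db/unique      :db.unique/identity}
    {:db/ident       :agent/token
     :db/valueType   :db.type/string
     :db/cardinality :db.cardinality/one}])

(defn remember-token!
  "Record (or upsert) an agent's auth token for later actions."
  [agent-id token]
  @(d/transact conn [{:agent/id agent-id :agent/token token}]))

(defn lookup-token
  "Fetch the token a prior action stored for this agent."
  [agent-id]
  (:agent/token (d/entity (d/db conn) [:agent/id agent-id])))

(remember-token! "agent-1" "s3cret-token")
(lookup-token "agent-1")
;; => "s3cret-token"
```

An early action (say, a login) calls remember-token!, and later actions for the same agent call lookup-token, all without the agents sharing any mutable in-process state beyond the database.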

Introducing sim-template

After this spring's State of Simulation Testing survey, it became apparent that one of the biggest gaps in Simulation Testing was the lack of information and tutorials on getting started with the technique. To that end, we've created sim-template, a Simulant project template that captures many of the best practices around simulation testing and forms a solid base to build a suite of tests upon. In this article, we'll walk you through how to use sim-template to create your own Simulant project, what is included in the suite, and how to use it.

Creating a Simulant project

Before you get started, there are a few prerequisite pieces of software you'll need installed on your system, namely:

- Leiningen - a Clojure project management tool that sim-template uses solely for template instantiation.
- Boot - a newer project management tool for Clojure that drives the generated Simulant suite.
- Docker (quasi-optional) - a containerization/virtualization tool used to run the sample service the generated suite executes against.
- An active Datomic transactor (optional) - the system of record for a Simulant suite. While you can run the suite in-memory without a transactor while you're getting your footing, one is necessary for persistent testing and result collection.

With the above installed, creating a new Simulant suite is as simple as:

    $ lein new sim-test org.my-org/sample-sim

Once run, you're off to the races.

Why Boot?

You may be wondering: why both Leiningen and Boot? It comes down to a mix of preference and capability (or rather, simplicity). First, some background: one of the larger driving goals Homegrown Labs has regarding Simulation Testing is...