Sunday, December 03, 2017

DDD Observers

Observers automatically run code when some condition occurs, without the causing code having to care.

Even in an imperative, object oriented environment, we'd like to support "birds sing when the sun comes up" without having to change the code for the sun coming up.

Theoretically, we could do this with simple polling. We could just explicitly check the condition every interval. E.g. is the sun up yet? Five seconds later: is the sun up now? Five seconds after that: what about now? This is not elegant, it could be costly, and it completely falls down when we need to poll many objects.

We can implement observers cleanly by wrapping our entities!

Instead of triggering on changes to the internal state of an entity, using "properties" as Kotlin does for example, we can wrap our entities in decorators that intercept calls from client code. Instead of introducing a new event abstraction, we can allow triggered code to compare the visible state of the entity before and after the triggering method.

Even a seemingly innocuous "event observer" class ruins the entity’s nice obliviousness. We step onto the slippery slope of adding notification features because some client might be interested. We shouldn't let our observing clients couple to our commanding clients, nor should we introduce a completely new abstraction in between. Observing clients are really interested in the change of an existing observable field, though not all clients are interested in the same changes.

The decorator solution does require calling code to know about the observer, usually at creation time. For all of the practical use cases for which I've needed observers, it's been sufficient to set them up at creation time. The calling code could be a factory, or even better a repository that wraps the entity at persist time.

This use of observers creates a nice parallel to "Collection Oriented Repositories" as discussed by Vernon in Implementing Domain Driven Design. That is, we explicitly wire our entities once, and then we don't need to worry about persistence and publishing every time the entity changes. We might even be able to leverage observers to support a Collection Oriented API on top of a Persistence Oriented Repository, and still avoid the morass of full object-relational mapping.

One last important point: be careful not to overuse the observer pattern. By definition, it hides the implementation. This can make it hard to figure out what the system is doing. In particular, never let your observing code make changes to your observed objects. Even if you're not worried about infinite loops, this is wrong because the observed code can't actually be oblivious.

Without further ado, here's the Java implementation of this properly decoupled observer. It uses annotation processing to generate code to support observers of this interface:
public interface Observer<T, V> {
 // Called before the triggering method runs; returns a snapshot of whatever
 // visible state the observer cares about (no snapshot by default).
 default V beforeChange(T t) {
  return null;
 }

 // Called after the triggering method, with the entity and the snapshot from
 // beforeChange, so the observer can compare visible state before and after.
 void afterChange(T t, V v);
}
Which may be wired simply by:
 YourEntity entity = new YourEntityImpl();
 entity = new YourEntityObservable(entity, yourObserver);


You could implement this observable decorator manually. It’s kind of boring, so you could instead generate it using something like the linked project above.
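
If you did write one by hand, it might look something like this sketch. The YourEntity interface, with its rename mutator and name query, is an illustrative assumption standing in for your own domain interface:

// Hypothetical domain interface being decorated.
interface YourEntity {
 void rename(String newName); // a mutating domain method
 String name();               // a query
}

public class YourEntityObservable implements YourEntity {
 private final YourEntity delegate;
 private final Observer<YourEntity, ?> observer;

 public YourEntityObservable(YourEntity delegate, Observer<YourEntity, ?> observer) {
  this.delegate = delegate;
  this.observer = observer;
 }

 public void rename(String newName) {
  observeAround(observer, () -> delegate.rename(newName));
 }

 public String name() {
  return delegate.name(); // queries pass straight through
 }

 // Snapshot before, delegate the real call, then notify with both states.
 private <V> void observeAround(Observer<YourEntity, V> obs, Runnable call) {
  V before = obs.beforeChange(delegate);
  call.run();
  obs.afterChange(delegate, before);
 }
}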

Monday, May 29, 2017

DDD Transactions

The last post ended on a bit of a cliffhanger. How do we leverage the compiler to validate before making changes to the domain? We’d like to make it easy to follow this pattern:
  1. Build all new objects, setting fields directly
  2. Validate, and if everything is valid then…
  3. Call the domain methods
The simplest thing is to hide our setters behind an interface. We’ll actually have two interfaces: one "uninitialized interface" for newly created objects, with just the setters; and one "domain interface" with just the proper domain methods. The uninitialized interface prevents taking any action until we’ve validated, and the domain interface encapsulates state. A factory returns an instance of the first interface (setters), and the validating method on the object itself returns the second interface (proper domain methods).

public class ItineraryImpl implements Itinerary, UninitializedItinerary {
  ...
}

public interface Itinerary {
  List<Leg> getLegs();
  boolean isExpected(HandlingEvent event);
}

public interface UninitializedItinerary {
  void setLegs(List<Leg> legs);
  Itinerary validate(ValidationResults r);
}

public class Cargo {
  public void assignToRoute(Itinerary itinerary) {
    ...
  }
}


The next step is to support composing validation for multiple objects. We can do this with a simple local transaction class, used like this for example:

UninitializedItinerary itinerary = itineraryFactory.create();
itinerary.setLegs(...);
// txn is an instance of the local transaction class, sketched below
txn.with(itinerary).add(i -> cargo.assignToRoute(i));
if (txn.isValid()) {
  txn.commit();
} else {
  reject(txn.getValidationResults());
}

With the validation approach described in the previous post, supporting these transactions is straightforward.
Within the domain, we use only domain interfaces. We use the transaction class to convert from uninitialized interfaces to domain interfaces. Especially with Java 8 lambda expressions, it's easy to defer actions until after validation. For example, the "cargo.assignToRoute(i)" call above does not run until and unless all validation for the transaction has succeeded.
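
Here is a minimal sketch of what such a local transaction class might look like. The Transaction and Validatable names are assumptions, as is ValidationResults being able to report emptiness; the real wiring would follow the validation post:

import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical abstraction over uninitialized interfaces: validating returns
// the domain interface (e.g. UninitializedItinerary.validate returns Itinerary).
interface Validatable<T> {
  T validate(ValidationResults results);
}

public class Transaction {
  private final ValidationResults results = new ValidationResults();
  private final List<Runnable> deferred = new ArrayList<>();

  // Validate now; return a step that defers domain calls until commit.
  public <T> Step<T> with(Validatable<T> uninitialized) {
    return new Step<>(uninitialized.validate(results));
  }

  public boolean isValid() {
    return results.isEmpty(); // assumes ValidationResults can report emptiness
  }

  // Only called when valid; runs the deferred domain calls in order.
  public void commit() {
    deferred.forEach(Runnable::run);
  }

  public ValidationResults getValidationResults() {
    return results;
  }

  public class Step<T> {
    private final T validated;

    Step(T validated) {
      this.validated = validated;
    }

    // Defer a domain call; it runs only if the whole transaction commits.
    public void add(Consumer<T> domainCall) {
      deferred.add(() -> domainCall.accept(validated));
    }
  }
}

Note that nothing runs at add time; commit is the only place domain methods execute.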

Using this approach, it's hard to accidentally use an object before it's been initialized. For example, an unadorned:
  cargo.assignToRoute(itinerary);
doesn't compile while itinerary is declared as an UninitializedItinerary. Nor does an attempt to modify the private state of an already initialized object:
  cargo.itinerary().setLegs(null);

"Enrichment" or "defaulting" has exactly the same challenge as validation. In fact, any calculation that is both validated and then applied to the domain has the same challenge. We want neither the entity nor clients of the domain to care about defaulting logic. The solution is the same: wire up defaulting services in the object factory and let the transaction wiring ensure that defaulting, as well as validation, is applied at the right time.

These transactions are like local database transactions. Instead of making changes immediately and performing a "rollback" if those changes are not actually valid, DDD transactions validate first and only then proceed to make changes.

This is how to do transactions for Event Sourcing or Prevalent Systems: enrich and validate whole objects, and use the type system to ensure that this happens before any changes are applied.

Friday, May 19, 2017

DDD Validation

How should we implement validation for Domain Driven Design?

Our first thought is to put validation in the entity. After all, entities are in charge of maintaining their invariants. They are the heart of our business logic, and validation can also be pretty fundamental to our business logic.

One disadvantage of validating in the entity is that the entity can grow too large. Another disadvantage is that validations often require access to services, and putting those in the entity is not Clean Architecture. A third disadvantage is that entity methods do not naturally provide a powerful enough API: throwing exceptions cannot reasonably report results for more than a single validation, it's too easy for calling code to neglect to call pairs of methods for validation or to check method return values, and these calling conventions detract from the proper focus of an entity.

E.g.

class Creature {
 public void eat(Snack v) {...} //invariant maintained using types
 private void setBloodSugarLevel(int i) {...} //invariant maintained privately
 public void eatIfValid1(Object o) throws RuntimeException {...} //no
 public void eatIfValid2(Object o) throws EatingException {...} //no
 public ValidationResults eatIfValid3(Object o) {...} //no
 public ValidationResults validateEat(Object o) {...} //no
}

The rest of this post describes a different approach to validation, which solves these problems.

We code each validation in a class by itself, thereby satisfying the Single Responsibility Principle. All the validations of an object should implement a common interface. Two interfaces are better than one here: use one interface for indicating that the object is invalid, and another for providing information regarding how or why it's invalid (SRP again). Not only does this help share code for generating validation results, it also keeps your code cleaner in the cases where the result is specific to the validation.

E.g.

@Singleton
class ComfySnackValidation implements Predicate<Snack>, Function<Snack, ValidationResult> {
 @Inject
 WeatherService weather;

 // true means invalid: a snack is only comfy between 68F and 78F
 public boolean test(Snack snack) {
  int temperature = weather.getCurrentTemperatureInFahrenheit();
  return temperature < 68 || 78 < temperature;
 }

 public ValidationResult apply(Snack snack) {
  return new ValidationResult(getClass().getSimpleName());
 }
}

There are two important aspects to this approach:
1) we validate whole objects and not individual method calls, and
2) we allow creating invalid objects.

Validating anything other than whole objects requires one of the inelegant APIs mentioned above. Validating only whole objects enables us to leverage the type checker, as we'll see in the next post. The objects that we validate may be entities or value objects. They may be "command objects" that exist solely to serve as arguments to a single method. Often, the object needs a reference to another object which is already valid and persisted. This is fine, so long as nothing in the persistent object graph yet refers back to the new object, which is not yet known to be valid.

Creating invalid objects is especially compelling in Java, which doesn't yet support named parameters, and for which entity builders can be challenging. Even in languages which do support named parameters, we often want to use the actual object before we know it's valid, consulting it in defaulting and validation logic. We may even want to publish invalid objects, and it’s better to not have two different code paths for publishing the same fields.

We can achieve “correctness by construction”; there should be no reasonable way to call the domain incorrectly. We can achieve this without the entities having to know about each validation. The essence of the design is that a factory injects a collection of validating services into the object to be validated.

e.g.

@Singleton
public class SnackFactory {
  private Validator<Snack> validator = new ValidatorImpl<>();

  @Inject
  void setComfyValidation(ComfySnackValidation v) {
    validator.add(v);
  }

  ...other validations to inject...

  public Snack create() {
    return new SnackImpl(validator);
  }
}

With a small generic ValidatorImpl, the boilerplate that we need in the validated object is minimal:

e.g.

class SnackImpl implements Snack {
 private Validator<Snack> validator;

 public SnackImpl(Validator<Snack> validator) {
  this.validator = validator;
 }

 public Snack validate(ValidationResults results) {
  return validator.validate(this, results);
 }
}

Here is a sketch of what such a generic validator might look like; the Validator interface shape and ValidationResults.add are assumptions consistent with the usage above:
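
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;
import java.util.function.Predicate;

// Hypothetical interface matching how the factory and entity use the validator.
interface Validator<T> {
 <V extends Predicate<T> & Function<T, ValidationResult>> void add(V validation);
 T validate(T target, ValidationResults results);
}

public class ValidatorImpl<T> implements Validator<T> {
 private final List<Predicate<T>> tests = new ArrayList<>();
 private final List<Function<T, ValidationResult>> reporters = new ArrayList<>();

 // Each validation plays both roles: Predicate (is it invalid?) and
 // Function (how is it invalid?), so register it under both.
 public <V extends Predicate<T> & Function<T, ValidationResult>> void add(V validation) {
  tests.add(validation);
  reporters.add(validation);
 }

 // Run every validation against the whole object, collecting a result per failure.
 public T validate(T target, ValidationResults results) {
  for (int i = 0; i < tests.size(); i++) {
   if (tests.get(i).test(target))                // true means invalid
    results.add(reporters.get(i).apply(target)); // assumes ValidationResults.add
  }
  return target;
 }
}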

The next post will discuss how the type checking works.

Thursday, May 18, 2017

DDD Entities

In Domain Driven Design, it's all about the entities.

Entities are the things in your users' mental model that change over time.

In Clean Architecture, your entities are independent of all of the rest of your software.

All the rest of your software is defined mostly in relation to entities. Repositories are collections of entities. And nothing else in DDD software changes over time.

Entities are the genuine objects of Object Oriented Programming.

Your software should only change entities by calling their methods, and never by directly modifying their internal state.

E.g. animal.eat(grass) and not animal.setBloodSugarLevel(100)
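
As a sketch, with illustrative Animal and Food types:

interface Food {
 int glycemicLoad();
}

public class Animal {
 private int bloodSugarLevel; // internal state: private, no setter

 public void eat(Food food) {
  // The invariant is maintained inside the entity, not by callers.
  bloodSugarLevel += food.glycemicLoad();
 }
}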

Thursday, May 11, 2017

High level code, great performance

https://www.microsoft.com/en-us/research/wp-content/uploads/2016/11/destination-passing-style-1.pdf

Exciting!

Sunday, March 19, 2017

Slicing the cake and different products

The concept of "slicing the cake" is one of the most important lessons to come out of the agile movement. People consistently neglect to apply it across products...

One solution is to have a single developer make all the changes across products. This makes sense if all the products use the same infrastructure, or all have high standards of documentation and devops. E.g. it doesn't work well if different products have different build processes that all require manual intervention. When we reduce barriers to entry this way, "ownership" of individual products might then be meaningful only for code reviews and maintaining long term health of each code base.

The only other solution is to have developers on each of the products all working together to integrate early. If your sprints are two weeks long, each developer gets one week to build an initial implementation, and must be available to integrate immediately on the first day of the second week. Everyone should expect developers to refactor their initial implementations afterwards.

Sources of technical debt

All coding smells get worse over time until they are corrected.

For example, an awkward abstraction might not seem so bad when it is introduced, but as more features are added, it gets increasingly brittle and creates more bugs.

In practical software development these are the most common sources of technical debt:

YAGNI (You aren't gonna need it.)

People build things that don't end up being needed at all, or, more insidiously, things whose overall cost exceeds their benefit.

Adapting

Existing code is organized into two parts, A and B, and a new use case comes along that needs mostly B, but the interface for using B is awkward. Rather than improve that interface, people have their new code C use B via an extra piece of code, a B-C adapter.

Adapters are necessary when the adapted code is owned by a different team, but in that case you'd hope that the interface is well designed in the first place, or at least that the integration is part of the application's job. When all the code is owned by a single team, adapters are just debt.

Special casing

A change is desired for a single specific case. The change could apply more generally, but it isn't urgent. People add new code that runs just for the specific case, because they are afraid of breaking the general one.

This source of technical debt is particularly tempting. And sometimes it's hard to distinguish from properly avoiding YAGNI. Just as with all refactoring, good automated tests are essential.

Wednesday, February 08, 2017

Schmoperties

Schmoperties is the best of the Java configuration APIs.

Monday, January 16, 2017

In praise of Kafka

Messaging is great because it reduces coupling.

Kafka does it even better.

A message consumer can come up and work just fine after being down all week.
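
For instance, here is a minimal consumer sketch using the standard Java client (kafka-clients 2.x); the topic, group id, and bootstrap server are illustrative. On startup it simply resumes from the group's last committed offset:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class EventConsumer {
 public static void main(String[] args) {
  Properties props = new Properties();
  props.put("bootstrap.servers", "localhost:9092"); // illustrative
  props.put("group.id", "event-consumers");         // illustrative
  props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
  props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

  try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
   consumer.subscribe(Collections.singletonList("events"));
   while (true) {
    // Picks up where the group last committed, even after a week of downtime,
    // as long as the topic's retention covers the gap.
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
    for (ConsumerRecord<String, String> record : records)
     System.out.println(record.value());
   }
  }
 }
}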

Cool unhyped languages

Pony - the best of low level and high level
Zig - more pragmatic than C
Lamdu - types are friendly and easy
Crema - program without Turing completeness