
Integration Tests VS Unit Tests

Many people ask me about this comparison, and even about the right proportion of each type of test.

I have faced several situations where I needed to evaluate the best approach. Of course, it is not a black-and-white situation, but I personally argue that we should always favor unit tests.

I compiled a comparison table of the two types of tests to help us brainstorm about it.

Integration Tests VS Unit Tests

Integration Test | Unit Test
Difficult to maintain | Easy to maintain
Usually doesn't test every possible path | Usually tests every combination of method parameters and paths
Difficult troubleshooting (many objects, layers, and external resources are involved) | Straightforward to identify and fix the problem (it tests a single unit of code)
Can take a long time to execute | Fast to execute
Parallel execution can cause problems | Runs in parallel with no problems (if well done)
Can be executed against any kind of implementation | Forces us to improve the code to make it testable
Complex to set up | Easy to set up
Can give false-positive results (since it is a black-box test, we can get the correct result without knowing exactly how it was generated – it could come from fixed or manipulated data, for example) | We test a small piece of code and can perform multiple types of validation, not only on the result but also on how it is generated

So why/when do we need Integration Tests?

Having more good unit tests will ensure higher product quality. So why do we still need, and why do we still have, integration tests?

Integration tests let us verify the integration points, such as:

  • contracts
  • connections (databases, caching systems, message queues…)
  • application bootstrap
  • infrastructure aspects (protocols, access control, authentication…)

How many integration tests do we need to cover these "integrations"?

Just a few. Not one for every functionality or use case.

That's why we should have many more unit tests (good tests with high code coverage).
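To make the contrast concrete, here is a minimal sketch of the kind of unit test that is cheap to write, run, and parallelize. It assumes xUnit, and PriceCalculator is a hypothetical unit under test:

using System;
using Xunit;

// Hypothetical unit under test: small and dependency-free.
public static class PriceCalculator
{
    public static double ApplyDiscount(double price, double percent)
    {
        if (percent < 0 || percent > 100)
            throw new ArgumentOutOfRangeException(nameof(percent));
        return price - (price * percent / 100);
    }
}

public class PriceCalculatorTests
{
    // A data-driven test covers many parameter combinations cheaply.
    [Theory]
    [InlineData(100, 0, 100)]
    [InlineData(100, 50, 50)]
    [InlineData(100, 100, 0)]
    public void ApplyDiscount_ReturnsExpectedPrice(double price, double percent, double expected)
        => Assert.Equal(expected, PriceCalculator.ApplyDiscount(price, percent));

    // Invalid paths are just as cheap to cover.
    [Fact]
    public void ApplyDiscount_RejectsInvalidPercent()
        => Assert.Throws<ArgumentOutOfRangeException>(
            () => PriceCalculator.ApplyDiscount(100, 150));
}

No external resources, no setup, no ordering concerns: each test validates one path of one unit.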

Code Smells: Primitive Obsession

What it is, why it's bad, and how to prevent it

Code smells are a symptom of our weakness. The smelly option seems faster than the good one but, as always, bad code slows the team down more and more.

How do you know if you are creating the Primitive Obsession code smell?

  • A primitive should never travel “naked” around the code.

When I say "naked" I mean: alone, exposed, outside a class.

Even for the simplest code or method, we may think it is not a problem to have this "data" traveling around. But the problem we don't see today, we will probably face tomorrow.

In this code, for instance, I decided to receive a phone number through a primitive variable as a parameter:

void AddPhone(string phoneNumber)
{
     //adding phone
}

We can predict many problems from this code, such as:

  • How can we ensure that the phone number is valid?
  • What if our system works with multiple countries that have different phone number formats?
  • What if we need to add more related information, like zone code, country code, and so on?
  • What if our system needs to validate that the phone number is mandatory?
  • What if we need to control phone types?

Primitive constants used to control information are another case:

const int PHONE_TYPE_CELLPHONE = 1;
const int PHONE_TYPE_HOME = 2;

void AddPhone(string phoneNumber, int phoneType)
{
     //validating by phone type
     //adding phone
}

What is the problem?

The most important issues are about Consistency and Scalability. Why?

  • If we use primitives and need consistency, we'll need to add validations all over the place to guarantee that the data is correct all the time
  • If we have more than one piece of related data (like phoneNumber and phoneType), consistency will be much more difficult to achieve (see the Data Clumps code smell)
  • Using primitives, we can't scale: it won't be easy to add new features, validations, or data
  • Using primitives will probably drive us to add more and more scattered validations and duplicated logic

Reasoning

There is no problem at all in creating a small class, even one with just a single attribute, property, or field. This will allow you to scale, reuse, and keep your software healthy inside.

If we model the system using objects from the beginning, the cost will be minimal, and we will have scalable, consistent, and healthy software.
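As a minimal sketch, the phone number example could become a small value object like the one below (the class shape, names, and validation rules are illustrative, not the only possible design):

using System;

// Replaces the "naked" string + int pair with a single consistent concept.
public enum PhoneType { CellPhone, Home }

public sealed class PhoneNumber
{
    public string CountryCode { get; }
    public string Number { get; }
    public PhoneType Type { get; }

    public PhoneNumber(string countryCode, string number, PhoneType type)
    {
        if (string.IsNullOrWhiteSpace(number))
            throw new ArgumentException("Phone number is mandatory.", nameof(number));
        // Country-specific format validation could be plugged in here.
        CountryCode = countryCode;
        Number = number;
        Type = type;
    }

    public override string ToString() => $"+{CountryCode} {Number} ({Type})";
}

Now the method receives a value that is valid by construction, and new rules or fields live in one place:

void AddPhone(PhoneNumber phone)
{
     //adding phone
}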

RabbitMQ clustering + HA + Swarm

Hello and happy deploying!

In this article I will share how I prepare an HA message broker with RabbitMQ, HAProxy, and Docker Swarm.

Our goal is a message broker architecture that is resilient and highly available. We also want to prevent message loss.

If you want to follow this in your machine, download the source code from: https://github.com/hermesmonteiro/rabbitmq_ha

Our final result will be a cluster of three RabbitMQ nodes behind load-balancing HAProxy instances, running on Docker Swarm.

1 – Prepare HAProxy image

HAProxy is a lightweight, free, open-source high-availability load balancer and proxy server.

We will prepare the HAProxy image using a dockerfile (located in .\haproxy).

This is a very simple dockerfile to start with. We use it just to copy our custom configuration file haproxy.cfg (also located in .\haproxy).

Our haproxy.cfg is very simple too. We can talk about HAProxy options later.

The most important configuration is the mapping between listening ports and target servers. We map port 8082 to the RabbitMQ AMQP port 5672 on the 3 nodes, and port 8083 to the RabbitMQ management UI port 15672.

At the end we configure the stats endpoint so we can check the HAProxy statistics.

global
    maxconn 4096

defaults
    timeout connect 60s
    timeout client 60s
    timeout server 60s

# Port 8082: AMQP traffic, balanced across the three RabbitMQ nodes (5672)
listen rabbitmq
    bind *:8082
    balance roundrobin
    server rabbitmq1 rabbitmq1:5672 check inter 1000 fall 3
    server rabbitmq2 rabbitmq2:5672 check inter 1000 fall 3
    server rabbitmq3 rabbitmq3:5672 check inter 1000 fall 3

# Port 8083: management UI traffic, balanced across the three nodes (15672)
listen rabbitmq-ui
    bind *:8083
    mode tcp
    balance roundrobin
    server rabbitmq1 rabbitmq1:15672 check inter 1000 fall 3
    server rabbitmq2 rabbitmq2:15672 check inter 1000 fall 3
    server rabbitmq3 rabbitmq3:15672 check inter 1000 fall 3

# Port 1936: HAProxy statistics page, protected by basic auth (admin:admin)
listen stats
    bind *:1936
    mode http
    stats enable
    stats hide-version
    stats realm Haproxy\ Statistics
    stats uri /
    stats auth admin:admin

In the dockerfile we create the image based on the official HAProxy image and just copy the config file to the directory HAProxy expects inside the container.

FROM haproxy

COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

2 – Prepare RabbitMQ image

RabbitMQ is a widely used open source message broker.

We will prepare the RabbitMQ image using a dockerfile (located in .\rabbitmq).

This dockerfile is also very simple. We use it just to copy the configuration files and enable the plugins we want.

We modify the main config file rabbitmq.config (located in .\rabbitmq) to tell RabbitMQ to load the "definitions" from a specific file: our definitions.json.

[
  {rabbit, [
    {loopback_users, []}
  ]},
  {rabbitmq_management, [
    {load_definitions, "/etc/rabbitmq/definitions.json"}
  ]}
].

In our definitions.json we have all the configuration we need. It is a bit more complex.

For now, the most important configurations are the queues, exchanges, and bindings.

Alternatively, we can configure everything in the RabbitMQ UI and export the definitions from there.

{
   "rabbit_version":"3.8.19",
   "rabbitmq_version":"3.8.19",
   "product_name":"RabbitMQ",
   "product_version":"3.8.19",
   "users":[
      {
         "name":"guest",
         "password_hash":"+EeUEEI/0NQvMwPrp/cqpZ9nBE1V04Z0l4Z62Stxis6tmnBr",
         "hashing_algorithm":"rabbit_password_hashing_sha256",
         "tags":"administrator",
         "limits":{
            
         }
      }
   ],
   "vhosts":[
      {
         "name":"/"
      }
   ],
   "permissions":[
      {
         "user":"guest",
         "vhost":"/",
         "configure":".*",
         "write":".*",
         "read":".*"
      }
   ],
   "topic_permissions":[
      
   ],
   "parameters":[
      
   ],
   "global_parameters":[
      {
         "name":"internal_cluster_id",
         "value":"rabbitmq-cluster-id-sCe03Vcr5buS4w-8iX6t_Q"
      }
   ],
   "policies":[
      
   ],
   "queues":[
      {
         "name":"MyQueue",
         "vhost":"/",
         "durable":true,
         "auto_delete":false,
         "arguments":{
            "x-queue-type":"classic"
         }
      }
   ],
   "exchanges":[
      {
         "name":"MyTopicExchange",
         "vhost":"/",
         "type":"topic",
         "durable":true,
         "auto_delete":false,
         "internal":false,
         "arguments":{
            
         }
      }
   ],
   "bindings":[
      {
         "source":"MyTopicExchange",
         "vhost":"/",
         "destination":"MyQueue",
         "destination_type":"queue",
         "routing_key":"MyTag",
         "arguments":{
            
         }
      }
   ]
}

In the dockerfile we create the image based on the official RabbitMQ image, add our custom files, and enable the Prometheus plugin.

FROM rabbitmq:3-management

ADD rabbitmq.config /etc/rabbitmq/
ADD definitions.json /etc/rabbitmq/

RUN rabbitmq-plugins enable rabbitmq_prometheus

3 – Composing

I separated the process into two compose files because I want a Swarm service for the HAProxy nodes but not for the RabbitMQ nodes.

RabbitMQ Compose

Relevant facts about this RabbitMQ compose file (rabbit_docker-compose.yml):

  • The image build is commented out because I generate the image separately, but we could build the image from the compose file.
  • We are creating 3 RabbitMQ "nodes", but there could be more.
  • All nodes share the same volume. This way we can use persistent messages and prevent message loss if a node goes down.
  • We are using an external network, shared with the HAProxy nodes.
  • We are not mapping/exposing ports, because we will access RabbitMQ through HAProxy.
version: '3.8'
services:
  rabbitmq1:
    container_name: rabbitmq1
    image: rabbitmq-cluster-base
    #build:
    #    context: ./rabbitmq_base
    #    dockerfile: dockerfile  
    restart: always
    environment:
      - TZ=UTC
    hostname: rabbitmq1  
    volumes:
      - ./data:/var/lib/rabbitmq/mnesia     
      
  rabbitmq2:
    container_name: rabbitmq2
    image: rabbitmq-cluster-base
    #build:
    #    context: ./rabbitmq_base
    #    dockerfile: dockerfile      
    restart: always
    environment:
      - TZ=UTC
    hostname: rabbitmq2    
    volumes:
      - ./data:/var/lib/rabbitmq/mnesia      
      
  rabbitmq3:
    container_name: rabbitmq3
    image: rabbitmq-cluster-base
    #build:
    #    context: ./rabbitmq_base
    #    dockerfile: dockerfile          
    restart: always
    environment:
      - TZ=UTC
    hostname: rabbitmq3    
    volumes:
      - ./data:/var/lib/rabbitmq/mnesia            

networks:
  default:
    external: true
    name: rabbitHA_network
    
volumes:
  data:

HAProxy Compose

Relevant facts about this HAProxy compose file (haproxy_docker-compose.yml):

  • The image build is commented out because I generate the image separately, but we could build the image from the compose file.
  • We are only configuring 3 replicas in the Swarm service, but there is much more configuration we could add.
  • We are mapping 3 ports:
    • 1936 for statistics
    • 8083 for the RabbitMQ UI
    • 8082 for the RabbitMQ queues (for producers and consumers)
  • We are using an external network, shared with the RabbitMQ nodes.

Notice that the mapped ports match the listening ports in the HAProxy config file.
version: '3.8'
services:
  
  haproxy:
    image: haproxy-base
    #build:
    #  context: ./haproxy
    #  dockerfile: dockerfile    
    hostname: haproxy
    volumes: 
      - ./tmp/data:/data    
    deploy:
      replicas: 3
    ports:
      - "1936:1936"
      - "8083:8083"
      - "8082:8082"    

networks:
  default:
    external: true
    name: rabbitHA_network

4 – The execution script

I use Windows 10 with Docker, so I created a .BAT script; it could easily be converted into a bash script.

  • We initialize the Swarm.
    • If one is already started, no problem: the existing swarm will be used.
  • We create the external network, with Swarm scope and the attachable option.
    • The attachable option allows the network to be shared between Swarm services and standalone containers.
  • We build the RabbitMQ image.
  • We run the RabbitMQ compose file.
  • We create the RabbitMQ cluster.
  • We deploy the HAProxy stack.
docker swarm init

docker network create --scope=swarm --attachable rabbitHA_network

docker image build --no-cache -t rabbitmq-cluster-base C:\HM\rabbit_cluster\rabbitmq

docker compose -f C:\HM\rabbit_cluster\rabbit_docker-compose.yml up --force-recreate -d

TIMEOUT 3

docker exec rabbitmq1 sh -c "rabbitmqctl stop_app; rabbitmqctl reset; rabbitmqctl start_app"

TIMEOUT 3

docker exec rabbitmq2 sh -c "rabbitmqctl stop_app; rabbitmqctl reset; rabbitmqctl join_cluster rabbit@rabbitmq1; rabbitmqctl start_app"

docker exec rabbitmq3 sh -c "rabbitmqctl stop_app; rabbitmqctl reset; rabbitmqctl join_cluster rabbit@rabbitmq1; rabbitmqctl start_app"

docker stack deploy -c C:\HM\rabbit_cluster\haproxy_docker-compose.yml haproxyStack

pause

5 – The result

After the execution, we should be able to open the RabbitMQ UI through the HAProxy port (8083).

We can see the RabbitMQ nodes configured in the cluster.

We can even see HAProxy at work: if we hit F5 quickly, the "Cluster" name in the upper right corner will alternate between the three nodes.

The message broker is ready to receive messages.
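As a quick way to exercise the cluster, here is a minimal publisher sketch in C#. It assumes the RabbitMQ.Client NuGet package (6.x API) and the guest user from definitions.json; note that it connects to the HAProxy AMQP port (8082), not to a specific RabbitMQ node:

using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory
{
    HostName = "localhost", // HAProxy front-end, not an individual node
    Port = 8082,
    UserName = "guest",
    Password = "guest"
};

using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Persistent delivery + the durable MyQueue lets messages survive a node restart.
var props = channel.CreateBasicProperties();
props.Persistent = true;

var body = Encoding.UTF8.GetBytes("hello from the HA cluster");

// MyTopicExchange routes messages with routing key "MyTag" to MyQueue,
// as declared in the bindings section of definitions.json.
channel.BasicPublish("MyTopicExchange", "MyTag", props, body);

Because clients talk only to HAProxy, a RabbitMQ node can go down and reconnecting producers and consumers will be routed to a healthy node.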

If we open the stats port (1936) we can see the HAProxy statistics.

Building high performance teams

Hello,

Working with a high performance team is good for everyone, not only for the business, so this is something we should try to achieve. In fact, people suffer when they are not managed properly to get the most out of each one.

I decided to write this post to help explain how I achieve this. It is totally possible if both the people AND the company are committed to it.

So let's check my 6 steps to build a high performance team:

1 – Get the right crew

Pretty obvious, and usually something we are always trying to achieve.

But sometimes we already have the right people; the problem is that we don't know how to work with them. So, before assuming the people are not who you need, these actions can help you get the most out of each one:

  • Make sure you know each person's soft and hard skills very well
  • Understand what motivates each one
  • Try to assign tasks aligned with each person's expectations
  • Give equal opportunities to everyone: to talk, to ask, to decide, to act
  • Try to organize the team and the project to get the most out of each person's skills and abilities

2 – Build the team

Although the team is composed of several people, it must have uniqueness in many ways.

That means the whole crew must be totally aligned: walking in the same direction, pursuing the same goals. These actions can help you "build the team":

  • Share a clear vision of the project
  • Present the project challenges and risks
  • Make sure everyone understands what they are working for (at the company/business level)
  • Care about the harmony among the members
  • Create connections between the people (a Mentor/Pair system is very good for this)

3 – Care about the people

People work better if they are happy and comfortable.

No, we can't solve all of people's problems, but if we and our company care about them and take actions to demonstrate it, this will be reflected in people's work.

The "job" and "going to work" should not be "one more problem" for anyone.

Other important aspects to care about: people's confidence, technical preparation, alignment of expectations, trust, freedom, and autonomy.

This will provide smooth and enjoyable working hours for the team.

4 – Measure team’s performance

Of course, we need to know how our performance improvements are going.

Some metrics could be useful to measure the performance, detect problems and take actions.

Delivery Metric

Lead Time is the best metric to evaluate a team's performance. It represents the time from feature request to feature deployment in production.

Lead Time can help identify blocking situations, technical gaps, communication and process problems and, of course, the team's maturity level.

Quality Metric

Technical Debt can help us evaluate the technical quality of what we deliver. Make technical debt visible and analyse its backlog.

QA results and metrics can help evaluate the quality of the delivered product.

5 – Keep the house organized

Bad organization causes wasted time, stress, and disagreements.

We need to keep everything clean and organized: tools, information, tasks, meetings, procedures…

  • Use a good project management tool, like Jira
  • Keep information centralized, organized, and available to everyone
  • Know and apply the best practices for the tool and for user stories
  • Have clear and well-defined roles and responsibilities
  • Control and avoid interruptions to team members
  • Promote knowledge sharing and transfer among the people

6 – Define short cycles and milestones

The team must feel their job is important.

Even if we don't use any Agile methodology, and even in a long-term project, it is important to define short cycles with a clear start and end.

This will give the team a periodic "mission accomplished" feeling. (Scrum sprints are a perfect way to do this.)

Beyond sprints or short cycles, some relevant milestones are crucial to the team's health. For instance: releases, partial deliveries, third-party integrations, new product launches, reaching a specific number of users, reaching a specific sales amount, or a big technical improvement.

Define those milestones, get there as a team, and celebrate. With each milestone achieved, the team will get stronger.

Microservices – How Conway’s law affects you

“Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure”

Melvin E. Conway, 1967

As in many other cases, although it was stated a long time ago, the law is still valid today.

Basically: organization structure equals system design.

Real life example

An organization divided into specialized departments like Front-end, Back-end, DBA, Infrastructure, and so on will probably design and deliver systems the same way.

That means each department will work on its own area and deliver its "part" of the system.

Working like that requires a lot of control, follow-up, and synchronization, and it creates strong dependencies between these departments.

About companies

Nowadays many companies want to migrate to microservices, and they often think it is just about breaking their systems into "small" services.

But before touching the code, important changes to the company's mindset and organization are necessary to achieve independent teams with ownership and responsibility.

If the company is not willing to change (or adapt), it won't really work with microservices and won't get their real benefits. It may end up with a lot of "small services", but not microservices in essence. (see the Microservices Manifesto)

To get there, the company needs (to get started):

  • Dissolve those specialized isolated teams/departments
  • Build cross-functional teams
  • Eliminate bureaucracy
  • Define the strategies to adopt (CI/CD, testing, accessibility, observability, scalability…)

About me and you

Most engineers want to work with microservices and are getting prepared to do so.

So how does Conway's law affect us?

If we expect to be happy working with microservices but our company is not really making the necessary changes, in the end we will just be working with a lot of "small services" in the worst possible scenario.

If you are looking for a job working with microservices, try to learn about the company during the interviews and find out whether it is really ready, or getting ready, for microservices. Some specific questions can help:

  • How are the teams organized/divided?
    • Traditional division by "technology" (front, back, database…) — Bad
    • Division by business area, or by service(s) — Good

  • What is the average team size? (of course, this depends on the company size)
    • Large teams — Bad
    • Small teams — Good

  • Who or which team is usually responsible for deployment?
    • Another specific team or person deploys — Bad
    • The team deploys and scales its own service — Good

  • How many teams work on one single service?
    • Multiple teams can change or work on a single service — Bad
    • A single service is maintained by only one team — Good

  • If a team detects the need to change something in its service – like the database type, for instance – can the team do it? Or who should the team talk to?
    • The team doesn't have autonomy and has to ask another team to make the change — Bad
    • The team is autonomous; it decides and executes changes on its own service — Good

Conclusion

Conway's law warns us that if we are interested in working with some methodology, technology, or pattern, it may not be possible if the company's structure/organization is not compatible.

Agile/Scrum and microservices are good examples of approaches that are only possible if the company changes its mindset and organization.