Thoughts from JUXT's XT24 Conference

I was fortunate enough to attend and speak at XT24. Here are my notes and thoughts from the event.

Fran Bennett - Interim Director & Board Member - AI: UNINTENDED CONSEQUENCES

Fran made some great points regarding the risks of trusting AI too much. While some people are worried about super-intelligent robots, Fran points out a far more likely set of "unintended consequences" that have already started to materialise.

How do you test that a system is working? You need acceptance criteria. Developing a set of requirements is a hard thing to do, and it's easier to just... not specify them. If that's the case, how can you test that your system is working?

Fran compared AI products to the recent British Post Office scandal, where people blindly assumed "the system is infallible". Clearly IT systems are not infallible; real life is messy! We must always keep in mind that the system could fail in weird ways.

Human testing of AI output also has issues. It ma

Workflows can encourage bad system design

In this article I'll discuss my thoughts on a potential design flaw in systems (primarily microservices) that use a workflow engine. This issue was discussed in Sam Newman's excellent "Building Microservices" book. I've lived through this and felt compelled to explain my take on it!

TLDR: Using a workflow engine runs the risk that the other services called by the workflow are pushed towards being data-only, with CRUD APIs and no behaviour. Centralised workflow engines that perform all the behaviour should be avoided. Consider domain-driven design first for your services. Only then consider small embedded workflow engines as a way of writing the internal code.

Why Use a Workflow Engine?

Workflow engines are a fantastic design choice when the process you are modelling is complex and involves many sequential or parallel tasks. They can show non-technical users the actual workflow representing the process, which is easy to follow and never drifts from r
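To make the risk in the TLDR concrete, here is a minimal Python sketch (all service and method names are invented for illustration, not taken from any real system): a centralised workflow that holds the business rules, leaving the order service as a bag of CRUD operations, contrasted with the domain-driven alternative where the behaviour lives inside the service boundary.

```python
# Hypothetical sketch -- service and method names are invented for illustration.

# Anti-pattern: the workflow engine owns all behaviour; the service is CRUD-only.
class CrudOrderService:
    def __init__(self):
        self.orders = {}

    def read(self, order_id):
        return self.orders.get(order_id)

    def update(self, order_id, data):
        self.orders[order_id] = data  # no rules, no invariants -- just storage


class CentralisedWorkflow:
    """All the domain logic leaks into the orchestrator."""
    def __init__(self, orders: CrudOrderService):
        self.orders = orders

    def cancel_order(self, order_id):
        order = self.orders.read(order_id)
        if order["status"] == "SHIPPED":  # business rule lives out here...
            raise ValueError("cannot cancel a shipped order")
        order["status"] = "CANCELLED"
        self.orders.update(order_id, order)


# Preferred: domain-driven design -- the service owns its behaviour and invariants.
class OrderService:
    def __init__(self):
        self.orders = {}

    def place(self, order_id):
        self.orders[order_id] = {"status": "PLACED"}

    def cancel(self, order_id):
        order = self.orders[order_id]
        if order["status"] == "SHIPPED":  # ...rather than inside the boundary
            raise ValueError("cannot cancel a shipped order")
        order["status"] = "CANCELLED"
```

In the first design, any new rule about cancellation has to be added to the orchestrator; in the second, callers (including a workflow) can only go through `cancel`, so the invariant cannot be bypassed.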

Comparing Netflix Conductor's Architecture with Flowable's

EDIT: This blog post took so long that by the time I published it, I discovered that just five days earlier, Netflix had announced they "will discontinue maintenance of Conductor OSS on GitHub". Here's my post anyway...

I have spent several years developing a service (within a big organisation) which uses the workflow engine Flowable to orchestrate the complexity of calling many different services. Whilst the project was a success and we saw the benefits of using a workflow engine, we did encounter a number of rather significant issues with Flowable itself (I'll explain these in another post later). This leads me to think that I would use a workflow engine again, but not necessarily Flowable. So what other workflow engines are out there? Netflix Conductor is one which I plan on evaluating in this post.

First - Some background

The service I worked on provided a RESTful API to its clients. On receiving a request, it executed a workflow which called many other
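The background pattern described above can be sketched in a few lines of Python. This is a deliberately engine-free illustration, not Flowable's or Conductor's API; the task and service names are invented, and the "workflow" is reduced to an ordered list of tasks that a request handler executes.

```python
# Hypothetical sketch of the pattern: a RESTful handler that, on each request,
# runs a workflow calling several downstream services in turn.
# Task and service names are invented for illustration.

def call_customer_service(ctx):
    # Stand-in for an HTTP call to a downstream customer service.
    ctx["customer"] = {"id": ctx["customer_id"], "status": "ACTIVE"}
    return ctx

def call_billing_service(ctx):
    # Stand-in for an HTTP call to a downstream billing service.
    ctx["invoice"] = {"amount": 42}
    return ctx

# The "workflow" is just an ordered list of tasks the engine executes.
WORKFLOW = [call_customer_service, call_billing_service]

def handle_request(customer_id):
    """What the REST endpoint does on receiving a request."""
    ctx = {"customer_id": customer_id}
    for task in WORKFLOW:
        ctx = task(ctx)  # a real engine adds retries, persistence, parallelism
    return ctx
```

A real workflow engine earns its keep precisely in the parts elided by that comment: persisting state between tasks, retrying failures, and running independent tasks in parallel.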

Thoughts from Sam Newman's - Microservices Data Decomposition Talk

I was fortunate to attend a great talk by Sam Newman on the above subject; here is a brief summary of the content and my thoughts.

First off, I should say that I went into the talk regarding myself as "experienced" on the subject of microservices. I anticipated learning about the more complex "data decomposition" aspects rather than the basics. However, I have to admit that even the fundamental points on microservices were still a benefit for me to hear. This is likely due to Sam being such a good teacher. I found myself somewhat re-learning topics which I hadn't anticipated.

Microservices and Backwards Compatibility

Sam described microservices as an architectural style which allows you to partition your functionality into separate services which are independently deployable. The data that each service stores is hidden within the microservice boundary. They allow you to scale your organisation, since you can have teams working on separate systems in paralle

Thoughts and Learnings from QCon Plus 2020

It has been six years since I last attended a QCon in London. This year I was lucky enough to attend QCon Plus, which (due to Covid) was online only. I was amazed by how professional this looked and felt. Right from the start it felt more like watching TV than attending a virtual meeting, but they were still able to make it feel interactive. It was the best it could have been in the circumstances.

The big problem I had with the event was actually more to do with my lack of focus. Even attending a physical conference, it can be hard to switch off from the day-to-day and focus on the talks, which reminded me of Tanya Reilly's point in her keynote from CLL19. I suffered from distractions a lot, and I wonder if this may have been made worse by the fact that I knew I could always catch up later. At a physical conference, you either see a talk or you don't. Virtual conferences are all too easy to put off until later when you're busy with work!

That said, I did see some gre

Thoughts from day one of CLL19 as a first time speaker

I was fortunate enough to speak at Continuous Lifecycle London (CLL19) this year. It's the first time I've ever spoken at a conference, and I found it very rewarding. In the past few weeks, I worked very hard preparing my talk. I had almost expected to be working on my slides solidly right up to the point of speaking. This is probably why I didn't give much thought to the other talks that I might be able to attend. On the day, I felt more prepared than expected and was able to watch some really great talks. Here are some of the thoughts from my experience of the day.

Welcome message - Joe Fay

Joe introduced the event and explained that the committee had reviewed more than two hundred proposals and filtered them down to just thirty-nine. I hadn't realised there had been so many proposals, so this just made me feel even more privileged to have been selected. Around three hundred people were attending.

Key Note - Tanya Reilly - Squarespace

Tanya's k

Polling vs WebSockets - Part 2 - Stress Testing

In my previous blog post, I discussed the efficiency of polling compared to WebSockets for a web application. Using these two implementations and their performance tests, I decided it would be interesting to perform some stress testing to see which solution can handle the most load. All code is on GitHub here. For details of the original problem and performance tests, see here.

Let's increase the threads until it fails

Even with a low number of threads, I would occasionally encounter errors, most likely soon after the server had started in its new Docker container. Therefore I decided to run each scenario three times and display all results.

Scenario - 40 Threads

- Job duration: 0-10 seconds
- 40 threads/users, instant ramp-up
- Each thread creating 10 jobs
- Polling interval of 500ms
- Timeout: 11 seconds

Results - Some Errors from WebSocket implementation

Run 1: Polling - 0 errors; WebSockets - 0 errors
Run 2: Polling - 0 errors; WebSockets - 2 errors
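The polling side of the scenario above can be sketched as a simple client loop: poll a job-status endpoint every 500ms until the job completes, giving up after the 11-second timeout. This is an illustrative sketch, not the actual test code from the repository; `fetch_status` is a hypothetical stand-in for the real HTTP call.

```python
import time

POLL_INTERVAL_S = 0.5  # 500ms polling interval, as in the scenario above
TIMEOUT_S = 11.0       # 11-second timeout

def poll_until_done(fetch_status, interval=POLL_INTERVAL_S, timeout=TIMEOUT_S):
    """Return the final status, or raise TimeoutError if the job never finishes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()  # in the real test this is an HTTP GET
        if status == "DONE":
            return status
        time.sleep(interval)
    raise TimeoutError("job did not complete within the timeout")
```

With jobs lasting 0-10 seconds and a 500ms interval, a single job can cost up to ~20 status requests before completing, which is exactly the overhead the WebSocket implementation avoids.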