Two true stories – several conclusions

This time I would like to open with two different stories and draw conclusions from them.

As you all know, I'm involved in many software projects these days and learn a lot from them.

In one of my projects, one of the team leaders, a Java veteran, complained that the data scientists on the team liked to work and code their ideas in Python. He claimed that Python, being compiler-free, produces code that is hard to manage, and he doesn't like it.


It's clear why data scientists use Python: it offers comprehensive tools and libraries for data and NLP manipulation. It's also clear why the Java engineer doesn't like it: Python is a very fast language to write in, but it is hard to debug and not safe enough from an engineering perspective.

The second story is about myself. Over the last year I have been hearing a lot about microservices: moving from monoliths and N-tier applications, from web services to a small-microservices architecture.

I tended to ignore this trend, as it seemed to me like yet another buzzword, just renaming the same SOA -> web services -> REST architecture as microservices.


Then I realized there is a symbiotic connection between the two stories.

The reason microservices are the right approach is the same reason the data scientist can and should use Python, while the team leader, an engineering member, can and should use Java or any object-oriented, structured programming language.


Microservices give software development teams the flexibility to use, within the same solution and the same product, the best tool or language for a specific purpose.

They also enable loose coupling between components, and fail-safe behavior in case one of the components fails or breaks.
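To make the loose-coupling point concrete, here is a minimal Python sketch; the service name and the fallback behavior are invented for illustration, not taken from any real product:

```python
def recommendations_service(user_id):
    # Stand-in for a network call to a separate microservice,
    # possibly written in a different language than the caller.
    raise ConnectionError("recommendation service is down")

def render_home_page(user_id):
    """Render the page even when a non-critical dependency fails."""
    try:
        recs = recommendations_service(user_id)
    except ConnectionError:
        recs = []  # degrade gracefully instead of failing the whole page
    return {"user": user_id, "recommendations": recs}

print(render_home_page(7))  # page renders with an empty recommendations list
```

The point is that the failing component is isolated behind a narrow interface, so the rest of the product keeps working.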


From these stories I came to at least two conclusions. First, everything has its proper use at the right time and for the right need.

Second, you should be open to other members' ideas and be able to act like a microservice yourself, so you can take the best tools and best practices from everyone.


And in the end, use new paradigms and technologies wisely to gain the best results.

.NET vs. Java – take II

In my latest post I dealt with the differences between .NET and Java developers, mainly from the holistic and passion perspectives.
This time I would like to spotlight another major difference.

As I mentioned, I come from the Java world. Last week I presented a session about Elasticsearch. The session was for .NET developers, and while running command-line activities I started to see puzzled eyes all around; some of the developers got lost.
After the session I checked why, and came to the conclusion that .NET / Microsoft developers feel much more confident with Windows-style, GUI-based, mouse-based activities than with command-line, Linux-like activities.

I drew another important conclusion: although Java and .NET code is similar, the two types of developers really differ in several respects. GUI vs. command line is just one more aspect.

.NET vs. Java – the holistic aspect

Recently I've been exposed to .NET and C# development and developers.

As a Java veteran, it has been a very interesting exposure.

Although from a code perspective the languages are very similar and most of the classes and capabilities are the same, in terms of loyalty and developer attitude they are totally different.

Back then, around 2001-2005, when Java was fresh and attractive, everyone who used Java seemed like a Java evangelist who wanted everybody to use Java, the best language in the world.

Sun was an innovative company, releasing Java as open source.

Then Oracle came and acquired Sun, and Java with it. From then onwards, Java developers were no longer heroes with open-source pride. They became like any other developers.

On the contrary, almost every .NET and C# developer is strongly connected to and inspired by Microsoft's activity. He's a Windows groupie who follows every piece of news related to his code and C#.

It's not only my perspective. You can see it in every software journal that compares Java and C# – there is much more demand for C# these days. And this was before C# became completely multi-platform.

I can't explain the situation, since at the end of the day Microsoft is a huge, ugly corporation, and still developers like it.

Maybe some of you can help identify how Microsoft does it, and help Java developers become prouder.


Everyone is looking into your wallet

The three giant companies that run the internet today are all looking for revenue.

Each company is looking for it within its domain of expertise.

Google wants to leverage its search capabilities to offer a shorter selling cycle – the Google 'Buy' button.

Facebook offers Messenger for small businesses, while Apple is still looking for the best way to make extra revenue on top of its lovely products; one option is to use Siri as a recommendation engine that advertises products.

The trend is clear, and its message to everyone in the industry who wants to do business is:

The current linear revenue model, based on products and services, is not sufficient.

Advertising by itself can't be the engine behind such conglomerates.


In order to go to the next level, companies need to use their data and knowledge and transform them into real action.

Actionable knowledge, not only in terms of decision making but in terms of money making.

What do you think?

Happy new year


Functional or non-functional, that is the question

As developers, we want to write new code and new functionality all the time.

Even when we're testing and writing unit tests or component tests, we always think positively: imagining how the flow should look and imitating that activity.

This is fine for 80% of the flows; however, the devil is in the details.

When hardening an app, especially a SaaS app, for production readiness, you should consider negative scenarios.

Negative scenarios split into at least two categories:

  1. Negative functional scenarios
  2. Non-functional scenarios
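To illustrate the first category, a negative functional test feeds the code invalid input and asserts that it fails loudly rather than silently. The `withdraw` function here is a made-up example, not from any real codebase:

```python
def withdraw(balance, amount):
    """Toy business rule used only to demonstrate negative testing."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Positive test: the happy path we all tend to write first.
assert withdraw(100, 30) == 70

# Negative functional tests: invalid inputs must be rejected.
for bad_amount in (-5, 0, 200):
    try:
        withdraw(100, bad_amount)
    except ValueError:
        pass  # the expected outcome
    else:
        raise AssertionError(f"withdraw accepted invalid amount {bad_amount}")

print("all negative scenarios rejected")
```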

Non-functional scenarios and tests

These types of tests include the following:

  1. Requirements for non-functional readiness –
    expected load, stress scenarios, and unexpected infrastructure and application behavior.
  2. Scalability tests
  3. Availability tests

Once you understand the needs and the extreme use cases under which the system should function, it's much easier to write testing scenarios.

Imitating such extreme use cases might be harder to implement, and in most cases it requires a relatively large allocation of hardware and human resources.

For such tests there is a bunch of tools, free and commercial, that can help you imitate such load.

One of the most famous tools is JMeter.

JMeter can imitate API calls and can easily sample web activity.
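JMeter itself is configured through its GUI or XML test plans; purely as an illustrative sketch of the same idea, here is a minimal Python load generator that fires concurrent calls and summarizes latency. The `call_api` stub stands in for a real HTTP request:

```python
import concurrent.futures
import random
import statistics
import time

def call_api(i):
    """Stub standing in for a real HTTP request and returning its latency."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.001, 0.005))  # simulated service latency
    return time.perf_counter() - start

def run_load_test(num_requests=100, concurrency=10):
    """Fire num_requests calls through a fixed worker pool and summarize latency."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(call_api, range(num_requests)))
    return {
        "requests": len(latencies),
        "mean_ms": statistics.mean(latencies) * 1000,
        "p95_ms": statistics.quantiles(latencies, n=20)[18] * 1000,  # 95th pct
    }

print(run_load_test())
```

A real test plan would also ramp up concurrency gradually and record errors, not only latency.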

To simulate high-availability and fail-over scenarios, you'll need to use your own scripts that proactively crash the application and the OS.
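Such a crash script can be as simple as killing the process abruptly and checking how the rest of the system copes. In this sketch, a throwaway child process plays the role of the "application":

```python
import subprocess
import sys
import time

def crash_process(proc):
    """Simulate an abrupt failure, the way a chaos script would."""
    proc.kill()
    proc.wait()
    return proc.returncode

# Launch a stand-in "application" (just a sleeping Python process) and crash it.
app = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
time.sleep(0.2)  # give it a moment to start
exit_code = crash_process(app)
print("exit code:", exit_code)  # non-zero: the app did not shut down cleanly
```

The interesting part of the test is not the kill itself but what you measure afterwards: does a standby take over, and how long does recovery take?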

Another unique approach that can help you a lot is inserting hooks into your code, so you can fine-grain the tests down to the function and method level.

To enable this you can use aspect-oriented programming in general, and AspectJ to implement it in Java.
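In Java you would write an AspectJ pointcut around the methods you care about; the analogous hook in Python, shown here only as a sketch of the idea, is a decorator that records per-method timing:

```python
import functools
import time

CALL_TIMES = {}  # method name -> list of elapsed milliseconds

def timed(fn):
    """Timing hook: the Python analogue of an AspectJ 'around' advice."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            CALL_TIMES.setdefault(fn.__name__, []).append(elapsed_ms)
    return wrapper

@timed
def handle_order(order_id):
    time.sleep(0.002)  # stand-in for real work
    return f"processed {order_id}"

handle_order(1)
handle_order(2)
print(CALL_TIMES["handle_order"])  # two timing samples, in milliseconds
```

The benefit during non-functional testing is that you see which specific methods degrade under load, not only end-to-end response times.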

Another important action here is to measure the system's resource usage during the tests, and also to test functional behavior and audit data correctness along the way.

As a rule of thumb, such tests might take longer to run and manage, since you want to verify the system's stable operation over time.

In extreme cases, such a test can take 14 days to run in order to get fair results.

With agile and CI methodology on mature software, such tests should run continuously and should verify smooth upgrades as well.


We’ll discuss upgrade procedures and best practices in one of our upcoming posts.

Enjoy your week,


Robots, chatbots, NLP and NLU – what is it all about?

We're all familiar with the 'virtual assistant' chats that appear on most commercial sites.


The common denominator is that all of them are based on a predefined set of automated rules that try to guide you through a predefined path in order to answer you or promote some sales activity.

These bots can't really understand your questions and solve the issue; instead, like an IVR menu, they redirect you to the relevant agent or line of business.
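A minimal sketch of such a rule-based bot makes the limitation obvious; the keywords and replies below are invented for illustration:

```python
RULES = {
    "billing": "Let me route you to the billing department.",
    "refund": "I can open a refund request for you.",
}
FALLBACK = "Sorry, I didn't understand. Transferring you to a human agent."

def rule_based_bot(message):
    """Keyword matching only: no real understanding of intent or context."""
    text = message.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    return FALLBACK

print(rule_based_bot("I want a refund for my order"))
print(rule_based_bot("My cat walked on the keyboard"))
```

Anything outside the predefined keywords falls straight through to the human agent, which is exactly the routing behavior described above.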

The root cause is the limitation of techniques such as NLP and NLU in understanding the real intent, flow, and context of the conversation.

NLP stands for Natural Language Processing. The field started around 1950, and even today there hasn't been much progress.

The main challenge is to provide the right answer in a specific domain with a limited knowledge base.

NLU stands for Natural Language Understanding. The difference between them is mainly that NLU is a sub-domain that deals with the understanding part of language, not with the processing itself.

Many companies and software techniques aim to solve this issue by tackling part of the problem, such as part-of-speech identification, speech recognition, and linguistic processing.

Although computing power has grown exponentially, the natural-language domain hasn't had such a breakthrough, and its growth is almost linear.

In order to improve NL results, many companies 'help' the NL technique by adding heuristics such as a knowledge base and predefined definitions and limitations. This can help the vendor in specific domains, and as a first answer for a customer, but not more than that.

In order to make a breakthrough, two major barriers need to be broken:

1. Unlimited unique data across various domains, with concrete answers

2. Feedback from human activity that can support an A/B-testing-like approach

I will give some examples and more details in one of my upcoming posts.


Have great week,


SaaS – the hidden advantage

I assume all of us are familiar with SaaS (Software as a Service) solutions.

When moving to or building new software in a SaaS fashion, there are many considerations regarding architecture and software development.

It starts with the need to support high availability and a distributed system from the beginning, and ends with decoupling the front end from the server side.

Many organizations and software products need a complete mind shift and a rewrite of their code when moving from an on-premise solution to a SaaS one.

Apart from code changes, it also requires a mind shift in DevOps and build operations. Continuous Integration (CI) is a must in SaaS, and gradual deployment is a key factor.

I have found that one of the biggest advantages of SaaS is the maintenance and upgrade flow.

In most cases, a product company has tens or hundreds of customer companies that require maintenance and support for its product.

Obviously, each set of customers runs a specific product release that differs from the others'.

Maintaining tens of different releases and sub-releases across many customers is, in most cases, a nightmare.

It takes time to release the specific software and to debug and fix each and every branch.

Implementing SaaS with the right DevOps and CI gives you full control over all customers.

As a manager, you can decide which customers will get the new release and serve as 'beta' testers for the rest, and which ones can stay on the previous release for a couple of weeks.
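One common way to pick the 'beta' customers deterministically is to hash the customer ID into a bucket. This is a sketch under the assumption of a 20% beta cohort, not any particular vendor's mechanism:

```python
import hashlib

BETA_PERCENT = 20  # share of customers that get the new release first

def release_channel(customer_id, beta_percent=BETA_PERCENT):
    """Deterministically assign a customer to 'beta' or 'stable'."""
    digest = hashlib.sha256(customer_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in the range [0, 100)
    return "beta" if bucket < beta_percent else "stable"

for cid in ("acme-corp", "globex", "initech"):
    print(cid, "->", release_channel(cid))
```

Because the assignment is a pure function of the customer ID, each customer lands in the same channel on every deployment, which keeps the rollout predictable.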

Deployment is done from one centralized place, without customer IT issues and headaches.

This is, of course, under the assumption that you chose the right infrastructure and the right deployment tools. We'll deal with that in one of our upcoming posts.


If you're already deploying SaaS solutions, I'll be happy to hear your voice and feedback below or on my LinkedIn page.


Have a great week,


Virtualization – the challenge and the opportunity

In recent years, most of us have delivered software into virtualized environments rather than to physical hosted machines.

It can be VMware, OpenStack, or VirtualBox.

What these infrastructures have in common is that it's no longer so important what hardware you're running on; it's much more important what virtual parameters you're running with.

You can run a distributed environment on any hosting solution, like Amazon AWS, DigitalOcean, or 1and1.

The benefits are endless, ranging from more flexibility to full management and cost-effective use of resources.

Apart from this we should consider two main factors:

1. In the end, there is overhead in running virtualized. It can be only 5-10%, but in some cases it can reach 30-50% due to special requirements or wrong configuration. You should know what your main needs are (memory, CPU, or disk) and work accordingly.

2. For IT experts, this move is not so trivial. Instead of domain-specific knowledge, IT should now support a large variety of software and configurations that look the same but might behave differently. This is both a challenge and an opportunity.

Many organizations failed to understand this mind shift and tried to adjust current paradigms to this new world.

In order to benefit from virtualization, both IT and software development need to change.

An interesting article on this can be found under the title "New Technical Roles Emerge for the Cloud Era: The Rise of the Cross-Domain Expert".

Software development should change as well, and I'll discuss it in one of my upcoming posts.


Have nice week,



The benefit of decoupling – a.k.a. the JSON revolution

We all remember the basic object-oriented programming principle of decoupling.

In recent years this paradigm has become more important, as many companies write their code and products in different programming languages.

If, up until 2-3 years ago, a company wrote its code with a Java backend and a JSP/JSF front end, today in most cases the front end is written in JavaScript, while the backend can be written in almost any programming language.

To support this, most backend developers today expose their code via a REST API, which can be accessed easily from any computer language.

With this, a product can combine Java, PHP, Python, JS, and many other programming languages and developer types, and in the end all of them speak the same language: JSON.
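The contract between those languages is just serialized JSON. A tiny Python round trip shows the idea; the field names here are made up for illustration:

```python
import json

def to_wire(order):
    """Serialize for any consumer: a JS front end, a PHP service, etc."""
    return json.dumps(order, sort_keys=True)

def from_wire(payload):
    """Parse what any other language's service produced."""
    return json.loads(payload)

order = {"id": 42, "items": ["book", "pen"], "total": 13.5}
payload = to_wire(order)
print(payload)
assert from_wire(payload) == order  # lossless round trip
```

Because every mainstream language has an equivalent of `dumps`/`loads`, none of the services needs to know what the others are written in.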

The main challenges are the performance, robustness, and security of such a combination.

We’ll drill down into those in our upcoming posts. Stay tuned….



Solr vs. Elasticsearch


In the recent year I was exposed to two strong enterprise search engines: one of them is Solr and the other is Elasticsearch.

There are many posts and sites offering a detailed comparison between the two.

My intention is not to make such a comparison, but to share my technical impression of them.

Basically, Elasticsearch is more of a NoSQL DB with a twist of full-text search capabilities, while Solr is a pure search engine.

If, at the end of the day, you're a programmer who needs some search capabilities over a NoSQL DB, Elasticsearch is your choice. However, if you're an NLP person who needs deep full-text search capabilities, then you should surely choose Solr.

I worked on two different projects and chose a different approach in each of them.

I suggest you download both and check for yourself.

You can always contact me and I'll do my best to guide you toward the best solution for you.


Good luck