A long-standing joke about the Hadoop ecosystem is that if you don’t like an API for a particular system, wait five minutes and two new Apache projects will spring up with shiny new APIs to learn.
It’s a lot to keep up with. Worse, it leads to a lot of work migrating between projects merely to stay current. “We’ve implemented our streaming solution in Storm! Now we’ve redone it in Spark! We’re currently rewriting the core in Apache Flink (or Apex)! … and we’ve forgotten what business case we were trying to solve in the first place.”