Sunday, 6 December 2015

Clojure X 2015

Clojure eXchange 2015 has just finished and I must say that I thoroughly enjoyed it. The feature-packed two days of the conference passed by in a blur. I've been to other IT conferences before, but this was my first one entirely devoted to Clojure. It was a great forum for meeting kindred spirits, with the added bonus of meeting some of the authors of open-source Clojure projects.



It was also gratifying to see, alongside the comprehensive and growing Clojure ecosystem, the large number of real-world applications written in Clojure. This is a testament to Clojure's burgeoning maturity as a programming language.

I'm pleased to report that the majority of the talks are already available on the Skills Matter website.

All of the talks were excellent. A particular favourite was Bozhidar Batsov's talk on CIDER. He's one of the authors of CIDER and gave an informative but also very humorous talk on some of its new features. CIDER is a Clojure interactive development environment for Emacs. I've never used Emacs in anger, but Bozhidar's enthusiasm has encouraged me to try to learn it.

I was also impressed by J. Pablo Fernández's talk, What is a Macro?. This highlighted one of Clojure's core strengths: how easy it is to add new language features through macros. He demonstrated this with an example based on JUnit assertions. Typically, in an object-oriented language such as Java, adding assertions for new types results in numerous overloaded methods which are all fundamentally doing the same thing. J. Pablo showed how the same effect could be achieved with a single macro: the behaviour is captured in one place but can be reused across several types. This showed the inherent power of macros. Something for me to investigate further.


Malcolm Sparks's presentation on yada was also very interesting. yada is a RESTful web service library for Clojure supporting all REST maturity levels. Having used predominantly Java frameworks, I was impressed by the succinctness of the API, where most of the plumbing or infrastructure code is hidden so one can concentrate on solving the business problem at hand.

I'm not a front-end developer, but I was still intrigued by Kris Jenkins's presentation on ClojureScript: Architecting for Scale. Kris showed how to forgo the MVC pattern and write single-page applications so that incidental complexity is kept to a minimum. This is something to adopt no matter what type of programming you do. The pattern he described has been released as a new library, Petrol, in which the use of reactive patterns becomes paramount.

Looking forward to Clojure eXchange 2016 already.

Thanks to Chris Howe-Jones for organizing a great event.

Monday, 12 October 2015

BDD with Cucumber School



I have been a BDD practitioner for a few years, through a combination of self-learning, attending various user groups and getting useful feedback from my peers. BDD is something you pick up gradually and get better at with practice. My personal experience when starting with BDD was that it can be a form of information overload, because one is not sure where to start. With newcomers, the focus is sometimes on the tools rather than on extracting and understanding the actual behaviour of the business. Some people think they've practised BDD if they just write scenarios and automate their examples, a tick-box exercise. But BDD is, and should be, larger than that.

I was fortunate to be asked to peer review a set of online training videos from the Cucumber School, the team who originally created Cucumber. The course assumes no prior knowledge in BDD concepts or the use of Cucumber. Every video has a range of exercises that you can perform after watching to reinforce your learning.  This is great as it means that the videos are not a dry exercise in memory retention but rather an impetus for self-learning.

I was gratified to see that the initial lessons were geared towards fleshing out behaviour via conversations and examples rather than diving straight into the tool. This is achieved by involving distinct user roles in defining and refining those examples.




A team is usually built from different disciplines, such as developers, testers and a business representative like a product owner. What was hammered home during the lessons was the need for continual collaboration and communication between all team members, so that the specifications constructed were effective and garnered a common understanding.

The Gherkin syntax was also explored. Gherkin is a business-readable DSL that describes the behaviour of your software without detailing its implementation. You can think of it as a form of standardization for how your scenarios should be constructed. Cucumber School gave an excellent explanation of Gherkin and how to tailor scenarios effectively.
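For instance, a scenario might read like this (a hypothetical example of my own, not one from the course):

    Feature: Account withdrawals

      Scenario: Withdraw less than the balance
        Given my account has a balance of £100
        When I withdraw £40
        Then my account balance should be £60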



Later lessons then dived into the Cucumber tool itself, right from setting up the IDE, through generating skeleton code from scenarios, to fleshing out the implementation. This was all performed in an iterative manner. I found this extremely beneficial because it demonstrated how to build up the executable part of a specification gradually. The code was incrementally built to support the individual steps of a Gherkin scenario. The reuse of step code across similar steps, through the judicious use of regular expressions, was a fundamental point. Again, no prior knowledge of regular expressions was expected: a beginner could still pick up the salient points and write their own expressions by the end of the lesson. Emphasis was also placed on the readability of scenarios, so that they were kept relevant to the business and their intent was clear.
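To give a flavour (my own sketch against the Cucumber-JVM API of the time; Account is a hypothetical domain class), a single step definition can serve several phrasings of a step through its regular expression:

    import cucumber.api.java.en.Given;
    import cucumber.api.java.en.Then;
    import cucumber.api.java.en.When;
    import static org.junit.Assert.assertEquals;

    public class AccountSteps {

        private Account account; // hypothetical domain class, sketched later

        @Given("^my account has a balance of £(\\d+)$")
        public void accountWithBalance(int balance) {
            account = new Account(balance);
        }

        // One pattern serves both "I withdraw £40" and "I try to withdraw £40"
        @When("^I (?:try to )?withdraw £(\\d+)$")
        public void withdraw(int amount) {
            account.withdraw(amount);
        }

        @Then("^my account balance should be £(\\d+)$")
        public void balanceShouldBe(int expected) {
            assertEquals(expected, account.balance());
        }
    }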

When devising steps, another point conveyed was that the steps should really be a thin veneer over a domain model. This is important because the domain model should be able to exist independently of the BDD framework. TDD was used heavily to flesh out the model that the scenarios suggested, and constant refinement of scenarios and unit tests was encouraged. A great point was made about listening to tests, because they are a reflection of how well the domain is understood and of how well that domain or scenario can be tested. Removal of whole scenarios was also encouraged if those scenarios no longer had any value: a scenario may, in retrospect, duplicate or simply be a generalization of another scenario which captures that behaviour more succinctly, in which case it should be pruned. Refactoring not only of code but also of the scenarios themselves was a salient aspect of the training; for instance, some scenarios were better described with the use of tables. This was all to keep the scenarios understandable and focused. Another point that was driven home was to strive to describe scenarios at the right level of detail: enough for the feature to be adequately described, but not so little that the scenario becomes vague, nor so much that the implementation seeps into the scenario.
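Continuing my earlier sketch, the step definition stays a thin veneer while the rules live in a domain model that knows nothing of Cucumber (again, illustrative names):

    // The step merely translates Gherkin into a domain call...
    @When("^I withdraw £(\\d+)$")
    public void withdraw(int amount) {
        account.withdraw(amount);
    }

    // ...while the behaviour lives in the domain model, independent of the BDD framework.
    public class Account {

        private int balance;

        public Account(int balance) {
            this.balance = balance;
        }

        public void withdraw(int amount) {
            if (amount > balance) {
                throw new IllegalArgumentException("Insufficient funds");
            }
            balance -= amount;
        }

        public int balance() {
            return balance;
        }
    }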


A practice which I intend to follow is Example Mapping. This is a great tool during the inception phase of a project, when one is trying to understand the problem to solve whilst also taking note of the questions we still have to answer. Example Mapping is a technique for breaking stories into rules and examples. It is a great example of Deliberate Discovery, where we try to get feedback on the application we're about to build before we actually write any code. By focusing on examples, we get a feel for how the application will behave before it exists.






I like that a holistic view of development was portrayed in the sense that BDD and TDD were not considered disparate, but practices to be followed at different levels of granularity. A great phrase that resonated with me was that BDD enforced outside-in development.

BDD ensures you're building the right thing, but TDD ensures you're building the thing right. 





It was also emphasized that BDD encourages different types of testing at different levels. For instance, there should be a plethora of TDD tests at the bottom of the scale, then system or integration tests to tie the domains together, with BDD or scenario-level tests above that. At the top should sit exploratory testing. What sometimes happens is that although automation is in place, there is still a substantial amount of manual testing undertaken, which is laborious, slow to complete and error prone.







BDD encourages that the higher up the pyramid one goes, the fewer tests there should be. All of the testing permutations happen lower down, so that as we rise the tests become fewer but more important.



The final lessons touched upon testing the domain model from a web front-end. Automation was achieved via the use of Selenium. I'm not a front-end developer, but the example shown was easily understandable and a great base to build upon for future front-end testing.

It was great to see the hexagonal design pattern being promoted to facilitate easy swapping between testing the domain directly and testing it from the front-end. This pattern, also known as ports and adapters, allows one to separate the core business logic from anything that touches the outside world, such as front-ends or databases. This enables the domain to be tested either directly or from a browser. It was a great example of front-end testing, but also of defining scenarios so that the domain was kept separate from any front-end concerns.
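In outline, the idea looks something like this (names are my own, not the course's): the scenarios talk to a port, and an adapter decides whether that means calling the domain directly or driving the browser via Selenium.

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // The port: what the scenarios need from the application.
    public interface TodoList {
        void addTask(String name);
        int outstandingTasks();
    }

    // Adapter 1: exercises the domain model directly.
    public class DirectTodoList implements TodoList {
        private final TodoBook book = new TodoBook(); // hypothetical domain class
        public void addTask(String name) { book.add(name); }
        public int outstandingTasks() { return book.outstanding(); }
    }

    // Adapter 2: exercises the same behaviour through the web front-end.
    public class WebTodoList implements TodoList {
        private final WebDriver driver;
        public WebTodoList(WebDriver driver) { this.driver = driver; }
        public void addTask(String name) {
            driver.findElement(By.id("new-task")).sendKeys(name);
            driver.findElement(By.id("add-task")).click();
        }
        public int outstandingTasks() {
            return driver.findElements(By.cssSelector(".task.outstanding")).size();
        }
    }

The same scenarios can then be pointed at either adapter, so the bulk of the suite runs fast against the domain while a handful of runs exercise the full stack.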




(Reprinted from Nat Pryce's Ports and Adapters Architecture)

As I proceeded through the lessons, any questions I had on BDD, on testing, or on the right way to approach devising a domain model were answered by later lessons. This was a great way to learn and for those points to stick in the mind.

A lot of people still think BDD is a tool and is only about testing. For a long time, I was one of those people too. I believe BDD is still poorly understood and frequently misapplied. But after following the Cucumber School lessons, I think a lot of the misconceptions about BDD will be eradicated. At the end of the course, I feel I've gained several new approaches to finding solutions to problems.

So in conclusion,
  • The videos and related exercises are excellent in describing the benefits of developing software following a BDD approach
  • I liked that the modus operandi was to adopt a reactive approach rather than trying to get everything correct the first time round. Recognizing shortcomings in scenarios or code, and test smells, led to feedback loops in which those were corrected later on. I think this was a great point to take away and remember.
  • I liked how BDD was presented as complementary to other practices or development techniques such as TDD or DDD.
  • The lessons are very good for newcomers but experienced practitioners will also find useful information. I've certainly learnt some practices I wasn't aware of, which I'll carry into my daily work. The series of lessons from Cucumber School come highly recommended.





Sunday, 9 June 2013

Code as you would order a burger



Recently during my work, I came across some code where the principle of Tell, Don't Ask came into play. You sometimes see code that isn't really using OO to its full strengths and is still procedural in stance. There are frequent examples where an object asks other objects for data and then does something with it. This is not really OO, even though you may still be using other aspects of OO such as encapsulation, inheritance or polymorphism. It's still asking for data, whereas the real value-add of OO is message passing, i.e. the behaviour of the objects rather than the data they may contain. I've come to understand this implicitly through the use of mocks during testing. The use of mocks results in code that better follows the 'Tell, Don't Ask' principle. One of the originators of mock objects noted that:

The trigger for our original discovery of the technique was when John Nolan set the challenge of writing code without getters. Getters expose implementation, which increases coupling between objects and allows responsibilities to be left in the wrong module. Avoiding getters forces an emphasis on object behaviour, rather than state, which is one of the characteristics of Responsibility-Driven Design - Tim Mackinnon

Once I got over this mental hurdle, I found I wasn't primarily concerned with the data passed between the objects, but rather that the appropriate method calls (messages) between objects were observed. I use the Mockito mocking library and I find that most of my tests are interaction tests, i.e. verifying that method calls were made in the manner expected. State-based testing is still important, but in the right place. This got me thinking about objects in terms of Alan Kay's emphasis on message passing http://c2.com/cgi/wiki?AlanKaysDefinitionOfObjectOriented. The following examples illustrate why telling an object to do something, rather than asking it for its data, is better.

I work a lot with enterprise integration, and one example that illustrates this point is message concatenation during the conversion of an internal message to one supported by an external protocol. Depending on whether a message is concatenated, the message may be decorated with extra parameters to indicate that it is a multipart message rather than a single message.

In this example, the extra metadata parameters that get added are:
  • The message reference for the multipart message
  • The part number of the message
  • Total number of parts for the message
Implementation using "Ask" approach
A factory was used to create a map of parameters for message concatenation:

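In outline it looked something like this (a reconstruction; the names are illustrative):

    import java.util.HashMap;
    import java.util.Map;

    public class MultipartMessageParameterFactory {

        // Lookup keys made public so that callers (and tests) can pull the data out.
        public static final String MESSAGE_REFERENCE = "messageReference";
        public static final String PART_NUMBER = "partNumber";
        public static final String TOTAL_PARTS = "totalParts";

        public Map<String, String> createParameters(InternalMessage message) {
            Map<String, String> parameters = new HashMap<String, String>();
            if (message.isMultipart()) {
                parameters.put(MESSAGE_REFERENCE, message.reference());
                parameters.put(PART_NUMBER, String.valueOf(message.partNumber()));
                parameters.put(TOTAL_PARTS, String.valueOf(message.totalParts()));
            }
            return parameters;
        }
    }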

The protocol converter that used this information was coded like this:

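Again in outline (reconstructed):

    import java.util.Map;

    public class ProtocolConverter {

        private final MultipartMessageParameterFactory parameterFactory =
                new MultipartMessageParameterFactory();

        public ExternalMessage convert(InternalMessage message) {
            ExternalMessage external = new ExternalMessage(message.body());
            // Ask for the data, then assemble it ourselves.
            Map<String, String> parameters = parameterFactory.createParameters(message);
            if (!parameters.isEmpty()) {
                external.addParameter(MultipartMessageParameterFactory.MESSAGE_REFERENCE,
                        parameters.get(MultipartMessageParameterFactory.MESSAGE_REFERENCE));
                external.addParameter(MultipartMessageParameterFactory.PART_NUMBER,
                        parameters.get(MultipartMessageParameterFactory.PART_NUMBER));
                external.addParameter(MultipartMessageParameterFactory.TOTAL_PARTS,
                        parameters.get(MultipartMessageParameterFactory.TOTAL_PARTS));
            }
            return external;
        }
    }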


This is bad for several reasons:
  • The MultipartMessageParameterFactory returns an abstract data type, in this case a map, so users of this class have to know which keys to use to pull out the data. If this approach really must be followed, a domain object to hold the multipart parameters would be better, perhaps a simple POJO with accessors for the different parameters.
  • The main problem with this approach is that it results in Inappropriate Intimacy. The user is asking for some data and then assembling that data thereafter. The lookup keys for the map are made public, presumably to mistakenly enforce DRY principles for magic strings within a unit test.
  • This class could be made better by passing the parameter factory into the constructor so that it can be mocked. That way, the unit test for this class would be simpler, although it would remain too complicated.

Implementation using "Tell" approach

The 'Tell' approach is all about telling an object to perform some piece of work on your behalf, rather than asking that object for data and then doing the work yourself. In the previous example, the protocol converter asked another object for the message concatenation info and then added those parts itself to the message. In the following example, we tell another object to do this bit of work for us. Firstly, we introduce a domain object to replace the map used for the multipart parameters:
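Something like this (illustrative):

    public class MultipartParameters {

        private final String messageReference;
        private final int partNumber;
        private final int totalParts;

        public MultipartParameters(String messageReference, int partNumber, int totalParts) {
            this.messageReference = messageReference;
            this.partNumber = partNumber;
            this.totalParts = totalParts;
        }

        public String messageReference() { return messageReference; }
        public int partNumber() { return partNumber; }
        public int totalParts() { return totalParts; }
    }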
The factory now returns this POJO instead of the naked ADT:
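In sketch form:

    public class MultipartMessageParameterFactory {

        // Returns null for a single-part message; a Null Object would work equally well.
        public MultipartParameters createParameters(InternalMessage message) {
            if (!message.isMultipart()) {
                return null;
            }
            return new MultipartParameters(message.reference(),
                    message.partNumber(), message.totalParts());
        }
    }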
Now the converter tells another object, a MultiPartMessageUpdater, to do the message updating:
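Roughly (reconstructed):

    public class ProtocolConverter {

        private final MultiPartMessageUpdater updater;

        public ProtocolConverter(MultiPartMessageUpdater updater) {
            this.updater = updater; // injected so it can be mocked in tests
        }

        public ExternalMessage convert(InternalMessage message) {
            ExternalMessage external = new ExternalMessage(message.body());
            // Tell, don't ask: the updater decides what, if anything, to add.
            updater.update(message, external);
            return external;
        }
    }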
The code for updating the message is pushed into its own class. As well as enforcing the single responsibility principle, this also endorses the command-query separation principle, so that the retrieval of the parameters (query) is separated from the addition of those parameters to the message itself (command).
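In outline:

    public class MultiPartMessageUpdater {

        private final MultipartMessageParameterFactory parameterFactory;

        public MultiPartMessageUpdater(MultipartMessageParameterFactory parameterFactory) {
            this.parameterFactory = parameterFactory;
        }

        public void update(InternalMessage message, ExternalMessage external) {
            // Query: retrieve the parameters...
            MultipartParameters parameters = parameterFactory.createParameters(message);
            // ...command: apply them to the outgoing message.
            if (parameters != null) {
                external.addParameter("messageReference", parameters.messageReference());
                external.addParameter("partNumber", String.valueOf(parameters.partNumber()));
                external.addParameter("totalParts", String.valueOf(parameters.totalParts()));
            }
        }
    }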
As a byproduct, the test for the converter becomes very simple. Beforehand, we would have had to test every permutation of concatenation parameters. Now, because the code has been pushed to its logical place, the test boils down to just testing the following behaviour:
  • Message should be updated if message is part of a multipart message
  • Message should be untouched if message is just a single message

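One way such a test might look with Mockito (a sketch; here the converter's only responsibility is to delegate, and the multipart/single permutations belong to the updater's and factory's own tests):

    import org.junit.Test;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;

    public class ProtocolConverterTest {

        private final MultiPartMessageUpdater updater = mock(MultiPartMessageUpdater.class);
        private final ProtocolConverter converter = new ProtocolConverter(updater);

        @Test
        public void tellsTheUpdaterToUpdateTheConvertedMessage() {
            InternalMessage message = new InternalMessage("hello");

            ExternalMessage external = converter.convert(message);

            verify(updater).update(message, external);
        }
    }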
The permutations for those three message parameters are pushed to where they should be, namely the test for the MultipartMessageParameterFactory. Each test then tests what it should, instead of testing the unit and all of its dependencies' behaviour at the same time, which results in a combinatorial explosion of test methods and makes the test unclear.

Conclusion

  • We've now got a class which tells other classes to do work, i.e. operation requests, instead of asking for data and doing the work ourselves.
  • By following this approach, the single responsibility principle naturally falls out. Each class performs a single, well-understood operation and only that operation. This makes those classes loosely coupled and candidates for reuse in different contexts if needed.
  • Unit tests become clearer and more focused, as they only test a single object rather than a multitude. You'll notice I've endeavoured to use constructor injection for any dependencies. This makes the testing of classes easier, as I can mock out any external dependencies.
I remember this principle by thinking of what you would do when ordering a burger in McDonald's. Would you ask the server for two sliced bun halves, some mayo, lettuce, a hamburger patty, cheese and tomato sauce, or would you just ask for a Big Mac? :D


Wednesday, 25 July 2012

Constructors that do too much


A flaw I've noticed frequently in legacy code bases is constructors which do too much work. A common anti-pattern is a configuration passed into a constructor. This configuration is then perused for information so that further objects can be constructed inside the constructor. I will discuss why I think this is bad practice.

As an example, I have a message dispatcher which delegates to a sender object to send a message to a destination. A dispatcher usually decides the appropriate destination based on the message; for this example there is only one endpoint. This destination is read from the configuration.

So I have an interface for the sender:
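In outline (a reconstruction, as the original snippets are no longer available):

    public interface Sender {
        void send(Message message);
    }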


and a dispatcher which uses the configuration to build a sender inside its constructor:
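Again in outline (Configuration and SocketSender are illustrative names):

    public class MessageDispatcher {

        private final Sender sender;

        public MessageDispatcher(Configuration config) {
            // The constructor digs into the configuration and builds its own collaborator.
            String host = config.get("endpoint.host");
            int port = Integer.parseInt(config.get("endpoint.port"));
            this.sender = new SocketSender(host, port); // hard-wired implementation
        }

        public void dispatch(Message message) {
            sender.send(message);
        }
    }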


The expected usage for the constructor then becomes:
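Something like:

    Configuration config = Configuration.load("dispatcher.properties");
    MessageDispatcher dispatcher = new MessageDispatcher(config);
    dispatcher.dispatch(message);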

This is bad on several counts:

  • An abstract data type is passed in. There's no indication of how it will be used or what it's for; it's just a bucket of parameters. Where possible, a specific type should be used in preference to an ADT. It's all about the domain and context; Domain-Driven Design and GOOS both touch on this point.
  • Harder to unit test. I have to make sure the configuration is correct so that the internal objects can be created properly. In my example there's only one object to consider, but there may be several.
  • I can't mock the behaviour of any objects created by the constructor. In essence, that behaviour is hard-wired, which may be deleterious to testing if it uses a real resource, e.g. a socket or database. It turns the unit test into more of an integration test.

Use a factory approach


So instead of passing in a configuration, pass in the already constructed objects. A DI framework such as Spring or Guice will assist you in this regard as it's fundamentally what DI is all about. You inject pre-configured objects into other objects rather than building those objects internally.

If you're not using a DI framework (I'm in this situation as it's legacy code) then it's better to pass a factory which builds the object on the fly as a constructor parameter.

To illustrate this:

I can have a factory that produces Sender objects. I've taken the liberty of introducing a generic factory for this purpose. It builds objects of the specified type for a given configuration.
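A sketch of such a factory:

    public interface Factory<T> {
        T create(Configuration config);
    }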



Now I provide an implementation for the Sender:
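For example:

    public class SocketSender implements Sender {

        private final String host;
        private final int port;

        public SocketSender(String host, int port) {
            this.host = host;
            this.port = port;
        }

        public void send(Message message) {
            // open a socket to host:port and write the message (elided)
        }
    }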

And to illustrate the point, I provide a factory implementation that builds this Sender:
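In sketch form:

    public class SenderFactory implements Factory<Sender> {

        public Sender create(Configuration config) {
            // The configuration parsing now lives here, and only here.
            String host = config.get("endpoint.host");
            int port = Integer.parseInt(config.get("endpoint.port"));
            return new SocketSender(host, port);
        }
    }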


Finally, I rewrite the message dispatcher. The constructor now takes objects rather than configuration:
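Roughly:

    public class MessageDispatcher {

        private final Sender sender;

        public MessageDispatcher(Sender sender) {
            this.sender = sender; // assembly only: no construction logic
        }

        public void dispatch(Message message) {
            sender.send(message);
        }
    }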


And the usage becomes:
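Something like:

    Configuration config = Configuration.load("dispatcher.properties");
    Sender sender = new SenderFactory().create(config);
    MessageDispatcher dispatcher = new MessageDispatcher(sender);
    dispatcher.dispatch(message);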



This is a lot better:


  • Now the dispatcher only knows about objects. It does not even know a configuration is used.
  • The transformation of a configuration into objects is localized in the factory and not spread out across the code base.
  • The class becomes trivial to unit test. Instead of worrying about configurations I can simply create a mock Sender and pass that to the dispatcher. In this way I can focus solely on the behaviour of the Dispatcher itself.


Conclusion

Resist the urge to pass around configuration objects and to allow constructors to build new objects. A constructor should ideally be used for assembly only. It's better to just pass pre-configured objects into a constructor. This makes the intention of that constructor clear and makes it easier to unit test the class in question, as the collaborators can be mocked easily.

Sunday, 8 July 2012

Don't forget to refactor

Recently I've been mentoring my colleagues in Test-Driven Development. I find it interesting introducing TDD to developers who've never followed the approach, and discovering what I take for granted having practised TDD for many years.

TDD can be summarized succinctly: 
  • Write a failing test to demonstrate the required behaviour of the code
  • Write just enough code to make the test pass
  • Refactor the code, keeping the tests green

(Thanks to Guido Maliandi for the diagram.)

I find that the last step is often ignored, or is not treated as an equally important step, by TDD practitioners. TDD detractors will often point to a lack of up-front design, and this stage is probably the culprit. I concede that if you pay no heed to design in TDD, you'll end up with a poor codebase: "I've got a green bar, now I'll continue with my next test." That's why the refactoring stage is important, if not the most important stage.

When you've got a green bar, you have made a step in the right direction, but it's not the last step you should take. The presence of a working unit test gives confidence that the code can be refactored. When I perform the refactoring stage, I look for any code smells I may have introduced. This is an important consideration, as I find TDD newbies dither a lot on whether the code they have written is 'perfect'. The initial goal should be to get that green bar. Once you're there, then you can consider other issues during the refactoring stage.
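As a small illustration (my own sketch): once the test below is green, the implementation can be reshaped in safety without changing its behaviour.

    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    public class DispatcherTest {
        @Test
        public void acceptsAWellFormedMessageWithAKnownRoute() {
            Dispatcher dispatcher = new Dispatcher();
            assertTrue(dispatcher.canDispatch(new Message("ORDER", "hello")));
        }
    }

    // First pass: the quickest thing that earns a green bar.
    // (routes is a map of message type to destination, elided here.)
    public boolean canDispatch(Message m) {
        return m != null && m.body() != null && m.body().length() <= 160
                && routes.containsKey(m.type());
    }

    // Refactored under the safety of that test: the same logic, with the concepts named.
    public boolean canDispatch(Message m) {
        return isWellFormed(m) && hasKnownRoute(m);
    }

    private boolean isWellFormed(Message m) {
        return m != null && m.body() != null && m.body().length() <= 160;
    }

    private boolean hasKnownRoute(Message m) {
        return routes.containsKey(m.type());
    }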

The refactoring stage should also be a point where you ask questions of your code at a wider scope:

  • Does the code fit into the overall architecture?
  • Does the code achieve non-functional requirements i.e. performance ?
The thing about writing tests before production code is that you end up without complex classes. The tests you write first force you away from complexity: a complex class would need a complex test with lots of stubs. As a result you will create small classes with only one responsibility. Your code will be decoupled, flexible and configurable. You don't have to worry about S.O.L.I.D. principles; they will naturally emerge.

The refactoring stage is an often overlooked one. Lack of focus on it leads to a build-up of code entropy, resulting in technical debt that will eventually need to be repaid. Keep in mind that TDD is not mainly a testing strategy, but a design strategy. Tests written first result in a better decoupled design; a better decoupled design is, in turn, easier to refactor. A case of positive reinforcement. The code will also be easier to maintain and to extend with new features should they be needed.

Friday, 4 November 2011

JAX London 2011

As a member of the London Java Community, I was gratified to hear of a Java conference, JAX London, on my own doorstep and eagerly signed up. It had been a while since I'd attended a Java conference. I used to be a regular attendee of JavaOne, but over the last few years it has lost its lustre, especially now that it seems to be a bolt-on to Oracle OpenWorld. Maybe that'll change in future.


Day 1:

I eschewed the workshops on Android and JEE6/7 development in favour of talks focused on Spring technology. There were some excellent talks given on the new additions to Spring, such as Spring Data, as well as the upcoming functionality in the next version of Spring. There were also some good introductions to Spring Batch and Spring Security. A talk was also given on the interoperability of Spring and Scala, which was an informative look at using Java DI frameworks with Scala rather than inherent Scala approaches (see the Cake pattern).

The most useful session in my eyes was the round-table discussion at the end of the day, in which the audience were allowed to put questions to the Spring creators themselves. This was especially fruitful, as you gain those nuggets of wisdom from experts in their fields which you wouldn't normally get outside a face-to-face conversation.

Day 2:

My second day at the conference was mostly Agile focused. I attended a very good talk on the Scrum Product Backlog by Roman Pichler, in which many salient points of advice were offered.

The next talks addressed software quality. The first, on Software Craftsmanship by Sandro Mancuso, reinforced the idea that software development is a craft rather than purely engineering. He extolled the principles of the Software Craftsmanship movement, promoting self-improvement, knowledge sharing, professionalism, passion and care. I resisted the urge to jump up from the audience and shout 'Amen, brother', but that was what I was thinking. The next talk, titled Slow and Dirty, by Jason Gorman, refuted the notion that we should release dirty code due to unrealistic deadlines and worry about the clean-up later. One of his memorable phrases was 'Anaerobic Software Development', a sentiment I'm sure a lot of developers will concur with. It describes situations where development teams start projects at an unsustainably fast rate, causing entropy in their code to build up. This causes intense pain after a short while, which results in the team having to stop for weeks, maybe months, until enough detritus is removed for the project to start moving forward again. That's if you're lucky; sometimes there's so much build-up (holding back on the profanity :)) that the project must be dropped entirely and a new one started. His guiding principle was that if you care about quality all of the time, you don't get into these situations. Next time a manager tells you we need something quick and dirty, to cut corners on quality to get something out of the door, you should be prepared to push back as much as possible. Otherwise you're only making a rod for your own back and building up technical debt which must be repaid later.

Given my interest in concurrency, the next session I attended was on message passing by Dr Russell Winder, who opined (with many interesting, witty and funny anecdotes) on why shared-memory multithreading, the prevailing wisdom in currently popular languages, should not be used. Instead, it should be replaced by higher-level constructs such as Actors, CSP and Dataflow, so that the issues seen in contemporary approaches are eradicated. He then gave a demonstration of Actors via the GPars library, of which he is the author. The 50-minute talk did not allow him to give a fuller treatise on the subject, but it has definitely piqued my interest in that library for further investigation.

The last session of the day was given by Martijn Verburg and Ben Evans on some of the new features of Java 7. They then delivered an open coding session on some of the features of Project Coin. Unfortunately, this turned out to be a thought experiment for me, as I didn't bring my laptop to the session, although I did get to ask Martijn some questions on some of the new concurrency features in Java 7.

Day 3:

In my previous blog post, I opined on whether Java had reached its peak. I may have to revise my opinion somewhat after two excellent keynotes from James Governor of RedMonk and Simon Ritter of Oracle. The latter reinforced my view that the really interesting stuff will happen in Java 8, with respect to lambdas and modularity. It was also interesting to see Oracle's roadmap for Java, in which they intend to release a new version of Java every two years. The roadmap given went up to 2019. I wonder what Java will look like then?

Fredrik Öhrström of Oracle then gave a very informative talk on some of the expected features in Java 8, specifically lambdas, map/filter/reduce and interface extensions. Following the takeover of Sun by Oracle, there are now two competing JVMs, JRockit and Sun's HotSpot. The plan is for JRockit functionality to be subsumed into the HotSpot VM, with the non-enterprise JRockit features to be added incrementally later.

The next lively talk, on performance tuning, centred on a well-known adage: 'Measure, don't guess'. The presentation used a poorly performing web application as its example. By measuring the throughput and load using different tools such as JMeter, VisualVM and vmstat, they showed how to investigate and eventually find the culprit. One should never shoot in the dark when performance tuning. A scientific approach should be followed, such as baselining your application before making any changes, so as to ensure that those changes actually lead to an improvement rather than a degradation.

I then attended another talk on Java 8 concurrency, given by Martijn Verburg and Ben Evans, advocating the use of the Java concurrency libraries rather than relying on outdated constructs such as synchronized. They also gave an overview of why parallelism will become more important in the upcoming years.

Changing tack for a few sessions, I then attended Ted Neward's NoSQL session. This was a talk on what NoSQL actually means, as there's a lot of ambiguity in the community on this fundamental point. He then compared some of the common NoSQL variants, such as db4o, Cassandra and MongoDB, and the typical situations where each could be used, a point sometimes missed: use the right tool for the right job. Ted Neward is a great communicator and the session was enlightening in all respects.

The final session of the day was given by LMAX on their Disruptor pattern. This was very interesting in many ways, as it seemed to go against the grain of previous sessions on concurrency. LMAX is a financial exchange, so latency throughout their system is extremely critical. Through empirical evidence, they demonstrated that accepted approaches to concurrency were non-performant due to the overhead of dealing with the JVM and the JMM. This was especially interesting because, as Java programmers, we're shielded from the low-level details of the architecture our programs run on, and we assume we need not worry too much about it, as it's the JVM's problem, not ours. This is no longer the case. We now have to be increasingly wary of the latencies between the processor, the L1 and L2 caches and main memory, as well as of the JIT-compiled assembly code; a mechanical sympathy, if you will. Their innovative solution rests on a lock-free ring buffer (the fundamental data structure at the heart of the Disruptor) rather than the traditional work queue/thread approaches. I've not really given the Disruptor pattern the attention it deserves, as it really is a sea change in how applications could be architected, both from a business logic and a data point of view. I will definitely be doing some investigation on this topic. There is a great introduction to it by Martin Fowler, and it's also advantageous that the Disruptor is an open source project.


Last thoughts:

Sadly, though, there were at least 10 other sessions I would like to have attended. I would have loved to have followed Ian Robinson's session on Neo4j, as well as some of the cloud sessions (some former colleagues of mine from Cloudsoft were presenting), but alas the timetable didn't allow me the opportunity. All in all, a feature-packed three days which ended far too quickly. It was great to meet a lot of kindred spirits who are as passionate about technology as I am. I'm already looking forward to JAX 2012 :).

Wednesday, 12 October 2011

Peak Java?

Has Java peaked? To me the Java 7 release feels like the Java 1.3 release did when it came out: some gravy, no meat. It's not a ground-breaker like the evolution from Java 1.1 to 1.2, or the release of Java 5. Sure, there have been some nice language improvements, but for me Java as a language has pretty much stabilised. These new syntactical changes just gloss over pain points for which there are ample (although sometimes arduous) workarounds.


The big changes I see in the future, in Java 8, are the Java Module System (JSR 277) and closures (JSR 335). The latter is needed to make Fork/Join more usable. Closures might have made a bigger impact a few years ago, but now, with a plethora of other JVM languages, I feel this will no longer be the case.


What's becoming increasingly apparent is the growing importance of the Java platform. The important thing is not the Java language itself but the JVM. This acts as a substrate for different languages. Although from above the languages may seem different, the JVM ensures that under the covers the behaviour is consistent.


Some examples of JVM languages which have gained traction over the last few years are Scala, Groovy, Clojure and JRuby. And we have some new kids on the block:
  • Ceylon, Red Hat's Java competitor
  • CAL, a Haskell-inspired functional programming language.
  • Gosu, an extensible type-system language compiled to Java bytecode.
Maybe Java has reached its peak; it certainly looks like it has reached a plateau. I'm looking forward to using Java 7 and 8, but for the JVM improvements only. The platform matters more than the language now.