Function declaration vs Function expression in JavaScript

In my previous blog, I introduced you to the basics of function declaration and function expression in JavaScript. In this article, we will answer the following questions:

  1. Why do we need a function expression?
  2. What is the difference between the two?

Why do we need a function expression when we already have function declarations?

One of the areas where this is extremely useful is when you want to pass a function as a parameter to another function, or return a function from another function. Let us consider the code below:

function callbackRef(callback) {
  callback();
}


callbackRef(function printHello(){
  console.log('Hello');
});

callbackRef(function printHowdy(){
  console.log('Howdy');
});

The output of the above is Hello followed by Howdy. To break this down, each call to callbackRef passes an entire function as an argument. That argument is received by the callback parameter, and we finally execute the function with callback(). This is the same idea as a function expression, where we write:

var callback = function (){

}
callback();

Using function references in the way mentioned above, we can compose functions and add dynamic behavior. Function expressions are also used to create a powerful design pattern called the Module Pattern, which I will explain in another blog.

What is the difference between a function declaration and a function expression?

In JavaScript we can do the following:

printToConsole();

function printToConsole(){
  console.log('hello');
}

To our surprise, the program runs fine and prints hello, even though the function is called before it has been declared. The reason behind this is a concept called hoisting. The English meaning of the word hoisting is raising something up. Applying this to our example, the function declaration is hoisted. What does this mean?

Mental Model:

//Imagine the function above to be picked up like this :
function printToConsole(){
  console.log('hello');
}
printToConsole();


A simple mental model to understand this is to imagine the function declaration being hoisted (picked up) to the top of its corresponding scope. With this understanding, I hope it is clear why the function runs successfully.

Now consider the same example using a function expression:

pToC();

var pToC = function printToConsole(){
  console.log('hello');
}

I have used a function expression here instead of a function declaration. What do you think happens here? Does it run successfully?

Well, it does not. Instead, we get the following error:

TypeError: pToC is not a function

In the case of a function expression, the function itself is not hoisted. Only the variable declaration (var pToC) is hoisted, so at the point where pToC() is called, pToC exists but is still undefined, and calling it throws the error above. (With let or const instead of var, you would get a ReferenceError instead.)

Conclusion

The main difference between a function declaration and a function expression in JavaScript comes down to hoisting. This difference is more at a conceptual level.

I will dig deeper into this concept of hoisting in my next blog.

Function declaration and Function expression in JavaScript

If you are a backend developer who has never tried JavaScript before, learning JavaScript can be an exhilarating experience, but at the same time it can give you a feeling of going down a rabbit hole. I am going to write a few articles to save fellow backend developers both time and frustration.

There is a certain vocabulary involved which you should be aware of when you start learning JavaScript. If you are a backend developer wanting to get your hands into JavaScript, there is a good chance that you will see a lot of code before you actually start coding. During this process of learning you will see functions being written in different ways, and sometimes this can be a point of confusion.

This article will introduce you to a couple of ways of writing functions in JavaScript.

First way to write a function is:

function calculateTax(amount){
 ...
 return amount * 10;
}

This is called a function declaration. If you are a Java developer, you wouldn’t be too surprised: the keyword function followed by the name of the function, parentheses for any parameters, and finally the function body. We need not mention the type of the parameter passed to the function or the type of the return value. This function will be executed when you call it as shown below:

calculateTax(2000);

Second way to write a function is:

var taxCalculation = function(){
   return 20000;
}

This is called a function expression. There is no name given to the function above (it is anonymous); it is referred to through the taxCalculation variable. You could add a name to the function if you wanted.

var calculateTax = function taxCalculator(){
  return 20000;
}

The name of the function, taxCalculator, may be useful in stack traces and inside the function (to call it recursively). To call this function, we need to use the name of the variable:

var taxableAmount = calculateTax();




The third way to write a function is:

var calculateTax  = () => { return 2000; };
var taxableAmount = calculateTax();

or simply

var calculateTax  = () => 2000;  // implicit return as it is a single expression
var taxableAmount = calculateTax();

If you are a Java or a .NET developer, this third type should be easy to grasp: it is an arrow function expression in JavaScript.

Conclusion

At a broad level, there are 2 ways of declaring functions in JavaScript: function declarations and function expressions. I want to keep this article short and not get into the differences between them.

I will get into the differences between the two in my next blog.

Java 8 : Imperative vs Declarative style of coding

The focus of this article is to compare the imperative style of solving a problem with the declarative style, using a couple of examples. It assumes basic knowledge of the Lambdas and Streams introduced in Java 8.

Example 1:   I do not wish to give you the problem statement on purpose. Look at the code below and try to figure out the intention of the code.
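A representative imperative version, assuming simple League and Player classes with getName(), getPlayers() and getAge() accessors, might look something like this:

List<String> names = new ArrayList<>();
for (League league : leagues) {
  if ("English Premier League".equals(league.getName())) {
    for (Player player : league.getPlayers()) {
      if (player.getAge() <= 25) {
        names.add(player.getName());
      }
    }
  }
}
Collections.sort(names);                      // sort by name
StringBuilder result = new StringBuilder();
for (String name : names) {
  result.append(name).append(",");
}
if (result.length() > 0) {
  result.deleteCharAt(result.length() - 1);   // remove last comma
}
String csv = result.toString();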

I am sure you read that code very carefully, and possibly a couple of times. What does it do? You might read it again! Most developers have a short memory. You know that you might have to revisit this code at some point, and hence you will probably add comments to the code above like sort by name, remove last comma, etc.

From a list of football leagues, the code checks if the league is the English Premier League and selects all of its players whose age is <= 25 years. It then sorts them by their names and finally returns a comma separated list of those names.

So our problem statement: Given a list of football leagues, select all the players who play for the English Premier League and are <= 25 years of age. Then sort their names and return them as a comma separated string. Let us break this problem statement down into what we want to do:

  1. Select all the players who play in the Premier League
  2. Remove all the players who are above 25 years of age
  3. Sort the names of the players
  4. Get their names and join them with commas

Let us code this now using declarative style.

Declarative style:
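A declarative version, with the same assumed League and Player classes, could look like this:

String csv = leagues.stream()
    .filter(league -> "English Premier League".equals(league.getName()))
    .flatMap(league -> league.getPlayers().stream())
    .filter(player -> player.getAge() <= 25)
    .sorted(Comparator.comparing(Player::getName))
    .map(Player::getName)
    .collect(Collectors.joining(","));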

Look at the code above and the problem statements from 1-4. Don’t you think that the code reads more like the problem statement? Not only does it read like the problem statement, the code does not have any garbage variables, and there are no for loops instructing how we want to iterate and select. We are focusing more on what we want rather than how we want to do it. Hence it is also easier to make changes to the code in case the requirements change, which means maintenance is easy.

We could have used a .sorted(Comparator.naturalOrder()) above and done the mapping to get the name first before sorting.

Example 2: Given a list of strings, we need to output the strings grouped by their length. If the input strings are “I”, “ate”, “food”, “slept”, the output should be

1 – I

3 – ate

4 – food, slept

Imperative style:
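A sketch of the imperative approach, assuming the input is a List<String> called words:

Map<Integer, List<String>> groups = new HashMap<>();
for (String word : words) {
  int length = word.length();
  if (groups.containsKey(length)) {
    groups.get(length).add(word);
  } else {
    List<String> bucket = new ArrayList<>();
    bucket.add(word);
    groups.put(length, bucket);
  }
}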

Hmm, one thing is clear: we need a HashMap. The key will be the length of a string and the value will be the list of strings with that length. So what are we doing here?

  1. Create a new HashMap
  2. Iterate through the list of strings
  3. If the current string has length x and x is already a key in the map, get the list corresponding to that length and add the current string to it
  4. Otherwise, create a new list, add the current string to it, and put it into the map with the length as the key and the list as the value
  5. Done!

In short, we are grouping strings by their length.

Declarative style:
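A declarative sketch using Collectors.groupingBy (same assumed words list):

Map<Integer, List<String>> groups = words.stream()
    .collect(Collectors.groupingBy(String::length));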

What are we doing here:

  1. Group all the strings by length
  2. Done !

Again, the code is expressive and it translates to the problem statement very clearly !

The Collectors.groupingBy method used this way returns a Map<K, List<T>>, in this case a Map<Integer, List<String>>, where

K – the type that we are grouping by (here the string length, an Integer)

T – the type of the elements that we are operating on (in this case, String)

Note that we have not considered the case where strings can repeat. This can be solved by collecting into a Set instead of a List, which is easy in both the imperative and the declarative style.

The declarative style does make your code more readable and concise. It might not be the magical solution to all problems; if you find the declarative style getting too complicated, stick to the old imperative style of programming.

Sorting a list in Java with null values

I am going to tell you a story with 2 characters, the developer and the quality analyst(QA). The developer is working on  a very simple application which deals with teenagers and the brand of cell phone they use.

I think the relation between developers and QA is like a one night stand, well, not the typical one! One fine day the QA finds bugs and, thanks to your manager (who leaves home early), the developer and the QA are made to stay back and work all night until the bug is fixed and the code is completely bug free. (Yeah, right!)

Our developer friend has written a Teenager class which has an age field and a phoneBrand field.

Manager: We have a new requirement. There is a need for a survey, and hence the application needs to return the list of teenagers by increasing age. If two teenagers have the same age, then we need to order them by the brand of their cell phones in alphabetical order.

Developer: This will take time. I need to make sure that I write the code using Object Oriented Principles.

Manager: Alright, I give you a day's time. (You know he does not understand object-oriented principles!)

You bring all your object-oriented skills to the table and create a separate class for the comparator, as the sorting is done using 2 different fields: the age field and then the phoneBrand field. The comparator class looks like this:
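A sketch of what such a comparator class might look like; a Teenager class with getAge() and getPhoneBrand() accessors is assumed:

public class TeenageSorter implements Comparator<Teenager> {

  @Override
  public int compare(Teenager t1, Teenager t2) {
    int ageComparison = Integer.compare(t1.getAge(), t2.getAge());
    if (ageComparison != 0) {
      return ageComparison;
    }
    // same age: fall back to the phone brand in alphabetical order
    return t1.getPhoneBrand().compareTo(t2.getPhoneBrand());
  }
}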

The input to your sort function is a List of Teenagers. You call the sort method like this:

Collections.sort(teens, new TeenageSorter());

Everything works fine. Given an input list of four teenagers, you get the following:

Output:

[Age:14 |Phone:Xiaomi, Age:15 |Phone:Apple, Age:16 |Phone:Apple, Age:16 |Phone:Samsung]

Sorted by increasing age and as there are 2 teenagers aged 16, the one with the Apple phone is followed by Samsung using alphabetical order. You check in the code and inform your manager that you are done with your task.

A few hours later, at around 6 pm when you are just about to leave, the QA guy comes to your desk.

QA: Your code is not working.

Developer: That is not possible! [Unpleasant stare]

QA: I am not sure, I added a teenager but did not give him a phone!

You think about it for a second; your coding skills (spelt “ego”) are at stake, and you have to come up with a reply. You know you left that part in the code but…

Developer: How can a teenager not have a phone! [You know in your heart that you never had a cell phone as a teenager]

QA: Well, yes, all teenagers do, but currently our system allows me to do that. I followed the steps, kept the brand name of the phone empty, and it crashed!

Well, you take a deep breath. You know you assumed that the data would never be null. You got what you deserved: a NullPointerException!

You go to your manager and tell him that it is possible that a teenager might not have a phone, so the requirement has to be discussed now. You involve the business analyst as well and explain the situation. The analyst says, well, if a teenager does not have a cell phone and two teenagers have the same age, we need to put them at the end of the list.

You have been asked to fix this in the next few hours. You start thinking about the problem. If the teenager does not have a phone, it means the phoneBrand field is null, hence the sorting now has to deal with nulls. You look at your comparator and this is what you come up with:
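A sketch of the reworked comparator, again with the assumed Teenager accessors:

public class TeenageSorter implements Comparator<Teenager> {

  @Override
  public int compare(Teenager t1, Teenager t2) {
    int ageComparison = Integer.compare(t1.getAge(), t2.getAge());
    if (ageComparison != 0) {
      return ageComparison;
    }
    return comparePhoneBrands(t1.getPhoneBrand(), t2.getPhoneBrand());
  }

  // teenagers without a phone (null brand) go to the end
  private int comparePhoneBrands(String brand1, String brand2) {
    if (brand1 == null && brand2 == null) {
      return 0;
    }
    if (brand1 == null) {
      return 1;
    }
    if (brand2 == null) {
      return -1;
    }
    return brand1.compareTo(brand2);
  }
}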

A shiny new method to sort the phoneBrand. This time when you test your code, you add a teenager without a phone to the input, and you get the following:

Output:

[Age:14 |Phone:Samsung, Age:14 |Phone:Xiaomi, Age:15 |Phone:Apple, Age:16 |Phone:Apple, Age:16 |Phone:Samsung, Age:16 |Phone:null]

The list above is sorted by increasing age. When the age is the same, 16, the teenager without the phone gets pushed to the end of the list, as if the teenager is not wanted. You check in the code and leave for the day (night?), still feeling bad that the teenager does not have a phone.

The next morning, the QA does his testing, things are working fine!

QA : Everything is fine now.

Developer: Yes, I know! [Arrogance is the key, isn’t it!]

Is there a chance to refactor your code? You look at your code and ask yourself if you can refactor it somehow. The comparator methods do not really look that clean; they are kind of fragile. It is not that clear, when nulls start popping up, whether you should return a 1 or a -1; it is more like trial and error at times. Imagine what happens if you are asked to sort by another field in addition to the 2 fields above. What if we add another field, earphoneBrand: if two teens are the same age and have the same phone, then sort by the brand of their earphones. What if someone does not have earphones?

You remember reading about Lambdas in Java 8 and start exploring if they made any changes to the Comparator interface. On further reading, you actually discover a lot and come up with the following analysis:

  1. Comparator, present in the java.util package, is a functional interface, and hence…
  2. Usage of the anonymous inner class can be replaced with the usage of lambdas.
  3. There are new static and default methods on the Comparator interface which can also handle null values while sorting.

So let us revisit the problem statement:

Given a list of Teenagers:

a) sort the list by age first

b) if age is the same then sort by phone brand name and

c) if the teenager does not have a phone (null), then he should be kept at the end for that particular age.

Declarative style:
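A sketch of the Java 8 version, using method references on the assumed Teenager accessors:

Comparator<Teenager> byPhoneBrand = Comparator.comparing(Teenager::getPhoneBrand,
    Comparator.nullsLast(Comparator.naturalOrder()));

Comparator<Teenager> byAgeThenPhoneBrand = Comparator.comparingInt(Teenager::getAge)
    .thenComparing(byPhoneBrand);

teens.sort(byAgeThenPhoneBrand);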

What? What is that? Let us break that down a little bit, following a, b and c above.

The first statement creates a comparator that sorts by the phone brand and instructs how to deal with null values. The next one combines the two: if the age is the same, then compare by the phone brand. The part Comparator.nullsLast(Comparator.naturalOrder()) means that when we get 2 phoneBrand values, we can have various combinations: both null, one of them null, or neither null. The code says: use natural order for sorting phoneBrand fields, and if you encounter a null, put it at the end of the list.

I must admit the code does need a little bit of explanation this time, but it is still expressive enough; the function composition part probably takes some getting used to. The code flows well, and changes to it are far easier than in the imperative style. The focus is more on what you want. If you get a new requirement of putting the teenager without a phone at the beginning of the list, all we need to do is use Comparator.nullsFirst above. If we need to sort by another field, add another thenComparing().

You are so enthusiastic about Java 8, but then it dawns upon you that your manager has not yet approved the switch to Java 8 and you are still coding with an old version. Well, send him an email and tell him how outdated that makes you feel.


Well, if you are still stuck with prior versions of Java, there are other libraries to do this. One solution is to use the Google Guava library, which has something similar to the Java 8 style of composing functions. It uses chaining of comparators, as below.
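One way this might look with Guava's ComparisonChain and Ordering, again with the assumed Teenager accessors:

Collections.sort(teens, new Comparator<Teenager>() {
  @Override
  public int compare(Teenager t1, Teenager t2) {
    return ComparisonChain.start()
        .compare(t1.getAge(), t2.getAge())
        .compare(t1.getPhoneBrand(), t2.getPhoneBrand(), Ordering.<String>natural().nullsLast())
        .result();
  }
});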

This chaining is not very different from the Java 8 style; in fact, a few things in the Google Guava library served as an inspiration to the designers of the JDK library. So if you cannot switch to Java 8 yet, you could use the Google Guava version; it is much cleaner than writing a comparator of your own with all the null checks and other checks.

It has been a couple of days since you checked in the code, things seem quiet, the QA, Business Analyst and the Manager, nobody has turned up at your desk. It means everything is working fine !

Not a JavaScript problem : 0.1 + 0.2 is not equal to 0.3

I have been a back-end developer for some time now, but I have decided to learn some front end (JavaScript) stuff too! I know, you must be thinking: it’s 2017, why on earth have I decided to learn vanilla JavaScript! Maybe I should be learning frameworks like ReactJS, Angular or Vue. Sure, why not, that is the idea, but I would like to get the basics (JavaScript) right first.

I have been learning the basics of JavaScript, and it turns out that quite a number of people mention a few WTFs with JavaScript. One of them being: 0.1 + 0.2 !== 0.3. If you are in a state of shock now, well, you should be there for a few more seconds…

Time to come out now! Well, this is not a problem with JavaScript. Don’t believe me?

Let us try the same thing in a different language:

Java:
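The Java snippet might have been as simple as this:

public class FloatingPoint {
  public static void main(String[] args) {
    System.out.println(0.1 + 0.2);
  }
}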

Output 

0.30000000000000004

So obviously this is not equal to 0.3.

The right way to do it in Java: using BigDecimal
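A small sketch using java.math.BigDecimal; note that the String constructor is used so that 0.1 and 0.2 are represented exactly:

BigDecimal sum = new BigDecimal("0.1").add(new BigDecimal("0.2"));
System.out.println(sum);                               // 0.3
System.out.println(sum.equals(new BigDecimal("0.3"))); // true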

If you try the same thing in Python, you will still get the same unexpected result.

The right way to do it in JavaScript: using the toFixed function

Using the toFixed function of the Number object helps us set the number of digits after the decimal point. It returns a string, and hence we need the + operator at the beginning to convert the result back to a number.

So let us not give JavaScript, the language, criminal status for an offense it never committed!

But why is 0.1 + 0.2 !== 0.3? In short, 0.1 and 0.2 cannot be represented exactly in binary floating point, so the stored values carry tiny rounding errors that show up in the sum. For the details, take a look here: Floating Point Math

Conclusion

Let us not find incorrect reasons to criticize the language. Learning JavaScript so far has been a fun learning experience. Of course there have been some WTF moments as well.

JavaScript is powerful ! I urge you to learn it.

Introduction to JShell : The Java Shell

In this article we will be taking a look at the new tool, JShell, which is a part of the Java 9 release.

What is JShell ?

It is an interactive tool, or shell, also referred to as a REPL (Read-Evaluate-Print Loop), that is packaged as part of the Java 9 release. A REPL basically Reads input or data typed by the user, Evaluates the input, Prints the result that was evaluated and then goes back to read the next input (Looping).

Why JShell ?

It is a very handy tool for developers, both old and new, wanting to explore the Java language by trying out or experimenting with snippets of code.

First-time Java developers who wish to get started with the Java language can use JShell to write small pieces of code and later move to IDEs like Eclipse, STS or IntelliJ to set up full-fledged projects.

For the experienced Java developer who wants to explore a new API or try out short examples, JShell can be quite useful. Let us say you are in the midst of fixing a bug or implementing new functionality in an existing class, using an API you have not tried before. The typical way to do this would be to create a new class with a main method in your code repository, type in the sample code for the API, right-click and run. Finally, we would copy the code and paste it somewhere in the existing code. As we age, we tend to forget a few things, like deleting the sample file with the main method, or we might even have added a main method to an existing class where the new code is needed. While checking in, we might just check in the file with the main method, only for our lead to reject it during code review. This is assuming your lead does a proper code review!

Starting JShell

Open a command prompt, go to your <JDK 9 installation>/bin directory and type jshell. You can also add this directory to your PATH environment variable so that JShell can be started from any directory on the command prompt.

Some of the things we can do with JShell

  • Addition

You don’t need JShell to check that 2 + 3 = 5. However, make note of the variable JShell creates, $1. This is a temporary variable. But did you realize that we did not write a class with a main method and a System.out.println to print the result of 2 + 3?

  • Exploring Lambdas, the shiny new feature – Wait a minute, these are more like shiny old features introduced in Java 8!
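For instance, one might type something like this at the jshell> prompt (JShell imports java.util.function.* by default):

Consumer<String> consumer = s -> System.out.println(s);
consumer.accept("Hello JShell");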

Did you notice the nice lambda created above?

  • Can’t remember the method in the Consumer interface?

Type in the name of the variable created above, followed by a dot, and press Tab! You get a bunch of methods that can be called on the Consumer interface. For the Consumer interface, it is the accept method that we are interested in.

  • Let’s try something with Streams

The moment you create a Stream, referenced by the strings variable, notice the type of strings. It is a ReferencePipeline – JShell returns the type of the variable too!

  • Create a class, declare methods and variables

Notice that after typing class Test { and pressing enter, JShell is intelligent enough to figure out that you are not yet done with the current statement and hence it waits for the next line of code.

  • Listing out variables declared so far

Get a list of all the variables using the /vars command. To get a list of all the methods, use the /methods command.

  • Listing all the code typed in so far

Use the /list command. We performed an addition, created a lambda, a Stream and a class, then an object, and called the getValue method. The /list command simply lists them.

  • Typed in some code, time for a tea break, worried about a machine crash?

You can save your code snippets using the /save command. Make sure you have the right permissions to save/create a file. You can follow that up with a /open command which will simply execute the code in the file.

  • Made a mistake, want to edit something? Typing the /edit command opens up a window where you can go and edit the code you typed. Using our knowledge of method references, we can change the lambda expression from (String s) -> System.out.println(s) to System.out::println. On clicking the Accept button, the change is reflected.


Notice that the /vars command now lists the updated reference to the Lambda expression.

  • Create a package

Come on, JShell is not for that. The entire session effectively lives inside a single package. Now, don’t whine that JShell does not allow you to create packages. It is not meant to do that!

  • Want to quit JShell ?

Inform JShell by typing /exit. JShell politely says Goodbye to you. Well, you could be a little rude and press Ctrl-D to exit JShell; in that case don’t expect JShell to say Goodbye!

Conclusion

JShell is a great addition to the JDK 9 release. Use it to experiment with snippets of code and get immediate feedback. This article was just a quick introduction; JShell also has an API which can be used to integrate it into an IDE or a tool.

JDK 9 reached GA on 21st September 2017; to take a look at all the features, visit the OpenJDK site here.

Java 8 map and flatMap

In this post, we will be taking a look at the map and flatMap methods introduced in Java 8. We will be looking at a scenario where the map method does not produce the required output and why we really need the flatMap method. I will be using multiple examples to illustrate the same and also show you imperative and declarative styles of coding.

map: Let us consider a scenario where we have a collection of Person objects. Given a list of Person objects, what if we wanted to return only the names of the people who have UK citizenship?

The imperative style of writing code would give us the following:
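A sketch of the imperative version; a Person class with getName() and getCitizenship() accessors is assumed:

List<String> ukCitizenNames = new ArrayList<>();
for (Person person : people) {
  if ("UK".equals(person.getCitizenship())) {
    ukCitizenNames.add(person.getName());
  }
}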

To get this, we have to initialize an empty list, iterate through the input list, filter the people who have UK citizenship and then add their names to the empty list we created. Let’s try and solve this the declarative way, using the map method.
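A declarative sketch with the same assumed Person accessors:

List<String> ukCitizenNames = people.stream()
    .filter(p -> "UK".equals(p.getCitizenship()))
    .map(p -> p.getName())
    .collect(Collectors.toList());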

The map method basically takes an object of one type and gives us an object of another type.

(Image: filter, map and collect – once the knob is turned on, everything happens!)

The signature of the map method:

<R> Stream<R> map(Function<? super T,? extends R> mapper)

The signature looks complicated but it is easy to understand. The R is the output type of the stream and T is the input type.

Remember how the filter method (explained here) took a Predicate as a parameter? The map method takes a Function. This is also a functional interface, and the function gets applied to each element. The p -> p.getName() is a lambda expression, the function that gets applied to each Person object. To understand this better, we could write the same thing as follows:
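A sketch with an anonymous Function implementation, using the same assumed Person accessors:

Function<Person, String> toName = new Function<Person, String>() {
  @Override
  public String apply(Person person) {
    return person.getName();
  }
};

List<String> ukCitizenNames = people.stream()
    .filter(p -> "UK".equals(p.getCitizenship()))
    .map(toName)
    .collect(Collectors.toList());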

Remember that the map method takes one object and returns exactly one object. This is 1-1.

A few more examples of the map API:

1. Given a list of numbers, we want to generate the square of each number:

We have a list of numbers of one type as input and we want to transform it:

Input: List<Integer> numbers = Arrays.asList(1,2,3,4,5);

Output:

[1, 4, 9, 16, 25]

2. Given a list of strings, convert each string to upper case:

Transformation of one type to another, this time String to String – use map function

Input:

List<String> strings = Arrays.asList("abc", "pqr", "xyz");

Output:  [ABC, PQR, XYZ]

3. Given a list of strings, find the length of each string:

Input:

List<String> strings = Arrays.asList("abc", "pqr", "xyz");

Input to the map is a string and output is the length of each string.

Output: [3, 3, 3]
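Sketches of the three transformations above:

List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);
List<Integer> squares = numbers.stream()
    .map(n -> n * n)
    .collect(Collectors.toList());        // [1, 4, 9, 16, 25]

List<String> strings = Arrays.asList("abc", "pqr", "xyz");
List<String> upperCased = strings.stream()
    .map(String::toUpperCase)
    .collect(Collectors.toList());        // [ABC, PQR, XYZ]

List<Integer> lengths = strings.stream()
    .map(String::length)
    .collect(Collectors.toList());        // [3, 3, 3]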

flatMap: To understand flatMap, let us consider a different example. Let us say there are 3 systems and each system returns a list of strings. There is an application which combines these lists and sends the data to us. Our system needs to take this input and produce all the strings as a single output.

List<String> system1 = Arrays.asList("AB", "cd", "EF");

List<String> system2 = Arrays.asList("GH", "ij");

List<String> system3 = Arrays.asList("kl", "MN", "op");

//Combination

List<List<String>> input = Arrays.asList(system1, system2, system3);

Attempt 1: The input type is a List of List<String>. We want to get all the strings from this input. We know that the map function helps us transform an object. Will it help us here? Let us take this step by step.

The call to input.stream() returns a Stream<List<String>>.
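A first attempt might simply print each element of that stream:

input.stream()
    .forEach(System.out::println);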

This gives an output:

[AB, cd, EF]

[GH, ij]

[kl, MN, op]

When we apply the map function, each time we are getting a list. But we need the individual elements in that list as a combined result. How do we get that?

When we have a single list as shown below and we apply the stream() method to it, what happens?

List<String> strings = Arrays.asList("A", "b", "C");

strings.stream()
       .forEach(System.out::println);

This gives us the individual elements in that stream. So will applying the stream method to each inner list solve the issue? Let’s try.

Attempt 2: Applying a stream to each inner list and using map
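A sketch of that attempt:

input.stream()
    .map(list -> list.stream())
    .forEach(System.out::println);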

This gives a weird output like this:

java.util.stream.ReferencePipeline$Head@87aac27

java.util.stream.ReferencePipeline$Head@3e3abc88

java.util.stream.ReferencePipeline$Head@6ce253f1

This gives us a stream of Stream objects. So the usage of the map method in this scenario is not right. As mentioned earlier, the map method takes one input and produces exactly one output. In this case, map gives us one stream per inner list, but we want the individual elements of those lists combined together. That is not what the map function does; we need a different function.

Attempt 3: Using a flatMap
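A sketch of the flatMap version:

input.stream()
    .flatMap(list -> list.stream())
    .forEach(System.out::println);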

This gives us the required output:

AB

cd

EF

GH

ij

kl

MN

op

Let us break flatMap up into 2 operations:

map:

[ [AB, cd, EF]      [GH, ij]           [kl, MN, op] ]

  Stream 1             Stream 2           Stream 3

flatten it:

AB, cd, EF             GH, ij               kl, MN, op

The flatMap() does a map + flatten. The flattening here takes the individual streams produced from each item in the list and merges them into a single stream. The flatMap() method needs a stream, hence the list -> list.stream(). The function passed to flatMap takes an input and can produce zero or more elements.

Let us consider another example to understand this well. Let us consider 3 different football leagues: the English Premier League, the Liga BBVA (the Spanish league) and the Bundesliga. Each league has a list of teams. We can represent this as:
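One possible representation; the class and accessor names below are assumptions for illustration:

class Team {
  private final String name;
  Team(String name) { this.name = name; }
  String getName() { return name; }
}

class League {
  private final String name;
  private final List<Team> teams;
  League(String name, List<Team> teams) { this.name = name; this.teams = teams; }
  String getName() { return name; }
  List<Team> getTeams() { return teams; }
}

List<League> leagues = Arrays.asList(
    new League("English Premier League", Arrays.asList(new Team("Arsenal"), new Team("Chelsea"))),
    new League("Liga BBVA", Arrays.asList(new Team("Barcelona"), new Team("Real Madrid"))),
    new League("Bundesliga", Arrays.asList(new Team("Bayern Munich"), new Team("Borussia Dortmund"))));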

Problem: Given a list of leagues, we need to return the names of all the teams in the leagues.

Imperative style:
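A sketch of the imperative version with the model above:

List<String> teamNames = new ArrayList<>();
for (League league : leagues) {
  for (Team team : league.getTeams()) {
    teamNames.add(team.getName());
  }
}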

Declarative style using map:
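A first attempt with map (this is the attempt that does not work):

leagues.stream()
    .map(league -> league.getTeams().stream())
    .forEach(System.out::println);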

We have a list of leagues. When we call stream() on it, each pipeline operation works on a single League object, but a league has multiple teams in it. If we try solving this using map(), then we end up getting this:

Output:

java.util.stream.ReferencePipeline$Head@87aac27

java.util.stream.ReferencePipeline$Head@3e3abc88

java.util.stream.ReferencePipeline$Head@6ce253f1

We end up getting 3 streams. This means that the map operation is not the right fit here. We need a function that can take these streams and combine the elements inside them into a unified output.

This is the scenario for a flatMap(), as we have a collection of collections: the input is a collection of leagues, and each league holds another collection, its teams.

When you have a collection of collections and want a unified output, use the flatMap.
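So the solution, with the model assumed above, could look like this:

List<String> teamNames = leagues.stream()
    .flatMap(league -> league.getTeams().stream())
    .map(Team::getName)
    .collect(Collectors.toList());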

I hope you have understood the basics of both map and flatMap methods and the scenarios in which they should be applied.

 

Using Spock to test JPA entities in a Spring Boot application

In my previous post we saw how to test JPA entities along with the Spring Data repository layer in a Spring Boot based application. We made use of JUnit and the AssertJ library.

In this post we will look at how to use the Spock framework to test the same. In fact, in this example, I will be using a combination of Spock based tests along with JUnit tests. To know more about the Spock framework, view the official site here. Spock is a testing and specification framework for Java and Groovy applications.

Let’s get started…

Setup using Maven (pom.xml):

  1. We need to add the spock-spring dependency. This will bring in the dependencies required to run Spock based tests in a Spring Boot based application.
  2. Notice the use of <spock.version>1.1-groovy-2.4</spock.version>. We are overriding the Spock version. Spring Boot 1.5.4 brings in version 1.0 of Spock, which needs a @ContextConfiguration to run Spock based tests in a Spring Boot application. Overriding the version to 1.1 removes the need to add this annotation.
  3. In the plugin section, we need to add the groovy-eclipse-compiler plugin, which will compile the Groovy code. Spock is based on Groovy, and hence using Spock to write tests means we write Groovy code.

Test classes using Spock

As mentioned in my earlier post, let us consider the same example of 2 JPA entities, SocialMediaSite and User (OneToMany). A User has an email which we represent as an EmailAddress value object. The test class for this looks as follows:

EmailAddressTest.groovy

  1. The test class extends spock.lang.Specification. This is how you begin writing a Spock based test.
  2. Notice the method names are strings – nice, descriptive method names.
  3. The when/then syntax is for assertions. It’s like saying, “Hey, when this happens, then check these things”.
  4. The where section in the first test method above is data driven. Notice the first 2 columns, emailAddress and a blank. This is because data-driven tables in Spock need 2 columns and we need just one. The following rows supply data to the same method, so the method is run with all the values mentioned in the first column starting from the 2nd row. That is awesome compared to writing multiple methods which do the same thing (in TestNG, this is done using a DataProvider).
  5. Notice that we have not used any assertion library here. In Spock this is done using ==.

SocialMediaSiteRepositoryTest.groovy

Notice the @DataJpaTest annotation on the class. It spawns an in-memory database and runs our tests against it. Along with this, the JPA entities are scanned, and Transactions, Hibernate and Spring Data are also configured. There is no need to add @ContextConfiguration as we are using Spock 1.1.

Running Spock tests and JUnit tests together

I have added the 2 Groovy test files in src/test/groovy. We can have tests written in JUnit too; I have a JUnit based test class in src/test/java. The groovy-eclipse-compiler we added in the pom.xml compiles and runs tests from both locations.

Conclusion

This is my first experience with the Spock framework and I have thoroughly enjoyed writing tests with it! There is of course a lot more to Spock. I hope you have enjoyed this quick introduction to using Spock for testing Spring Data repositories and JPA entities in a Spring Boot application. The synergy between Spock and the testing changes made in Spring Boot since version 1.4 (test slices) is great!

You can find the project on github.

Testing JPA entities in a Spring Boot application

In this blog we will look at how to get started with testing JPA entities and Spring Data Repository in a Spring Boot based application. We will be using JUnit for the same.

I have observed that a good number of projects do not write any tests for JPA entities or for the repository layer which uses those entities to perform CRUD operations. Writing tests for JPA entities and Spring Data repositories can be really effective in checking that all the entities are mapped correctly and ensuring that the repository methods implemented by Spring Data, along with the custom methods that you write, behave in the right way. After all, most applications talk to a database, and if your data is not being handled properly, what is the point of having a great user interface or a well designed business layer?

Since Spring Boot 1.4, testing these layers has become quite easy and more focused. Let us consider a simple One-Many relation between 2 entities, SocialMediaSite and User.

A SocialMediaSite can have many users which is mapped using the @OneToMany JPA annotation.

SocialMediaSite.java 

User.java

Notice that EmailAddress is a value object.

SocialMediaRepository.java – This is a Spring Data repository interface (a proxy instance is created by Spring to back this interface).
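Abridged sketches of the classes discussed above; the field and method names are assumptions, shown only to make the tests below easier to follow:

// SocialMediaSite.java
@Entity
public class SocialMediaSite {

  @Id
  @GeneratedValue
  private Long id;

  private String name;

  @OneToMany(cascade = CascadeType.ALL, mappedBy = "socialMediaSite")
  private List<User> users = new ArrayList<>();

  // constructors, addUser(), getters and setters omitted
}

// User.java
@Entity
@Table(name = "users") // USER is a reserved word in some databases
public class User {

  @Id
  @GeneratedValue
  private Long id;

  private String name;

  @Embedded
  private EmailAddress email;

  @ManyToOne
  private SocialMediaSite socialMediaSite;

  // constructors, getters and setters omitted
}

// EmailAddress.java
@Embeddable
public class EmailAddress {

  private String value;

  // validation, equals() and hashCode() omitted
}

// SocialMediaRepository.java
public interface SocialMediaRepository extends CrudRepository<SocialMediaSite, Long> {
}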

SocialMediaSiteEntityTest.java – Tests the JPA entities.
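A sketch of what such a test class might look like; the entity constructors and the addUser convenience method are assumptions from the sketch above:

import static org.assertj.core.api.Assertions.assertThat;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.boot.test.autoconfigure.orm.jpa.TestEntityManager;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@DataJpaTest
public class SocialMediaSiteEntityTest {

  @Autowired
  private TestEntityManager entityManager;

  private SocialMediaSite site;

  @Before
  public void setUp() {
    site = new SocialMediaSite("Facebook");
    site.addUser(new User("Alice", new EmailAddress("alice@example.com")));
  }

  @Test
  public void socialMediaSiteIsPersistedAlongWithItsUsers() {
    Long id = entityManager.persistAndGetId(site, Long.class);
    SocialMediaSite found = entityManager.find(SocialMediaSite.class, id);

    assertThat(found.getName()).isEqualTo("Facebook");
    assertThat(found.getUsers()).hasSize(1);
  }
}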

The setUp method annotated with the @Before annotation above initializes some mock data that we can use for the tests.

The key takeaways from this class:

  1. @RunWith(SpringRunner.class) – This brings together JUnit and the Spring test module. The SpringRunner class extends SpringJUnit4ClassRunner, so it is pretty much the same as what was used earlier. Shorter class names are always pleasing to the eye.
  2. @DataJpaTest – This is the most important annotation for testing JPA entities in a Spring Boot application. It spawns an in-memory database and runs our tests against it. Along with this, the JPA entities are scanned, and Transactions, Hibernate and Spring Data are also configured.
  3. TestEntityManager – @DataJpaTest also configures a TestEntityManager, an alternative to the EntityManager. It actually makes use of the EntityManager but has a nice set of methods like persistAndGetId, persistAndFlush etc.
  4. AssertJ – The code above uses the AssertJ library to perform all the assertions; it is a nice way to get all the assertions done very fluently! It is pulled in by the spring-boot-starter-test dependency.
  5. JUnit – This is also pulled in by the spring-boot-starter-test dependency.

On similar lines, the tests for the Repository class can also be written as shown below.

SocialMediaSiteRepositoryTest.java
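A sketch along the same lines (imports as in the previous test, plus the repository):

@RunWith(SpringRunner.class)
@DataJpaTest
public class SocialMediaSiteRepositoryTest {

  @Autowired
  private SocialMediaRepository repository;

  @Test
  public void savedSiteCanBeReadBack() {
    repository.save(new SocialMediaSite("Twitter"));

    assertThat(repository.findAll()).extracting("name").contains("Twitter");
  }
}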

Conclusion

As you can see, testing JPA entities and the repository layer in a Spring Boot based application is quite simple. We don’t need configuration for the entire application (all layers) to test the database related functionality. Using @DataJpaTest in Spring Boot helps us configure and test just the JPA slice of the code.

Writing good tests for the JPA/Hibernate entities and the repository layer against an embedded database can be extremely useful in the long run. Any changes in the database schema or in the entity mapping which might lead to issues at run time can be caught immediately. In addition one can also see the SQL queries being executed which can be extremely useful.

You can find the code on github.

Note: In case you are interested in testing JPA entities using Spring Boot 2, JUnit 5 and Java 14, read my post here.

Getting Intimate with Spring Boot and Hibernate

In my previous blog, we looked at how to get started with a simple Spring Boot and Hibernate application. We managed to get our application up and running in a few minutes.

In this article, we will look at how and what Spring Boot does behind the scenes.

Basic requirement

To setup a Spring based application with Hibernate/JPA we usually need the following:

  1. A DataSource which connects to a database, with details like the database URL, username and password – this is really independent of whether we use Spring or JPA.
  2. A JPA EntityManager to perform repository (CRUD) related operations.
  3. Vendor specific (Hibernate) properties.
  4. Transaction support.

Let us take a look at how these things get configured.

In the case of an in-memory H2 database, all of the above was configured automatically by Spring Boot. We did not write a single piece of configuration, either in Java or in XML. Well, how does Spring Boot do everything for us? Let’s take a look:

Vanishing (configuration) act explained:

1. The starting point is the spring.factories file. This file has an Auto Configure section which Spring Boot uses to determine what should be auto configured. This file is in the META-INF folder which is part of the spring-boot-autoconfigure-<version>.RELEASE.jar. The SpringFactoriesLoader class loads the spring.factories file.

2. Since we are talking about JPA/Hibernate, the spring.factories contains a (among other autoconfigurations) key value pair: org.springframework.boot.autoconfigure.EnableAutoConfiguration=\org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaAutoConfiguration

3. The key EnableAutoConfiguration above is triggered by the @SpringBootApplication annotation on the SpringBootJpaApplication.java class, which is the starting point of our application.

4. The value part above, the HibernateJpaAutoConfiguration class, which is part of Spring Boot's auto-configure module, looks like this:
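An abridged sketch, keeping only the annotations discussed below (the real class contains much more):

@Configuration
@ConditionalOnClass({ LocalContainerEntityManagerFactoryBean.class, EntityManager.class })
@Conditional(HibernateEntityManagerCondition.class)
@AutoConfigureAfter({ DataSourceAutoConfiguration.class })
public class HibernateJpaAutoConfiguration extends JpaBaseConfiguration {
  // ...
}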

One of the important annotations above is:

@AutoConfigureAfter({ DataSourceAutoConfiguration.class }).

This indicates that HibernateJpaAutoConfiguration should be applied only after DataSourceAutoConfiguration has been processed, i.e. only once a DataSource is available. It would make no sense to configure JPA and Hibernate without a datasource/database!

5. The DataSourceAutoConfiguration looks as follows:
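Again abridged, keeping only the parts referenced below:

@Configuration
@ConditionalOnClass({ DataSource.class, EmbeddedDatabaseType.class })
public class DataSourceAutoConfiguration {
  // ...
}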

6. This is a normal Spring configuration class which sets up the DataSource only if all of the above annotations satisfy our criteria. We used an H2 database, hence DataSource.class and EmbeddedDatabaseType.class satisfy the condition.

7. Now the configuration moves back to step 4, where we needed the presence of a datasource/database at a minimum. On to the next part, where the following is checked:

@ConditionalOnClass({ LocalContainerEntityManagerFactoryBean.class, EntityManager.class })

This is configuration specific to JPA, and the @ConditionalOnClass annotation checks if these classes are on the classpath. They are present in our case through the Spring Data JPA library.

8. Next comes @Conditional(HibernateEntityManagerCondition.class). This checks if the JPA provider is Hibernate; the check is done by HibernateEntityManagerCondition.java. In our case it is true, since Spring Boot brings in Hibernate as the default JPA provider and HibernateEntityManager.java is on the classpath.

9. Once these conditions are satisfied, HibernateJpaAutoConfiguration, which extends the JpaBaseConfiguration class, takes over.

This is annotated with @EnableConfigurationProperties(JpaProperties.class). The JpaProperties class is annotated with @ConfigurationProperties(prefix = "spring.jpa"). This gets triggered if we add any spring.jpa.xxx properties to our application.properties file. We would use this to set any JPA specific properties; we did not add any, hence the defaults are assumed.

10. The JpaBaseConfiguration class contains all the other remaining configuration. It primarily uses the @ConditionalOnMissingBean annotation to configure the EntityManager, JpaVendorAdapter, LocalContainerEntityManagerFactoryBean and JpaTransactionManager if they have not already been configured. @ConditionalOnMissingBean applied on a method means: if this bean has not been configured, then execute the method and configure it.

So in this way, the datasource, entity manager, vendor (Hibernate) specific properties and transactions get configured. If you have set up a Spring Hibernate application before, you would have done the same using Java configuration or XML, configuring the above-mentioned properties yourself. You can still configure any specific property explicitly, and in that case Spring Boot will skip auto configuring it.

Conclusion 

All the Harry Houdini illusions, or Spring Boot magic, start from the spring.factories file. Then, through the usage of some powerful annotations like @ConditionalOnMissingBean, @ConditionalOnClass and @AutoConfigureAfter, the JPA/DataSource/Transaction infrastructure gets configured depending on what is found on the classpath. Spring Boot will not configure a property or a bean if it has already been configured.

Complaining is a Habit?

I have heard people complaining about the so-called magic in Spring Boot. Well, when the Spring folks gave us XML configuration, we complained! They gave us configuration via annotations and Java configuration to get rid of the XML, and we still complained! Now things get configured automatically… and we still complain. Complaining can be good at times; the folks at Pivotal have been listening to our complaints and have been giving us fantastic tools and frameworks. But let’s not forget that the details are out there, so let’s explore a little more before complaining.