How to find all duplicates in an array – Java and Kotlin

During a conversation with a colleague, he mentioned a problem that he was asked in a technical interview. The problem was to find the duplicates in an array of integers. My immediate reaction – “That is easy and I am sure you got it!” But then there was a two-second silence from his end, and he said that he couldn’t get it right since some additional constraints were mentioned to him ‘later’.

Problem Statement

Find the elements which appear twice in an array of integers, where every element satisfies 1 ≤ a[i] ≤ n (n = size of the array). We need to do this without using additional space and in O(n) runtime.

Solution

Now, it was my turn to be silent for the next couple of seconds after he told me about the problem statement with the constraints about not using additional space and O(n) runtime.

We had a discussion about using a Set or maybe a HashMap in Java, since the main part of the problem was to find the “duplicates”. But a Set or a Map needs O(n) additional space, so it would not be the right solution under these constraints. A nested for loop would not work either, since we need to solve the problem in O(n) runtime. Well, let’s consider some sample data and try solving this step by step.

Sample Input data

arr = {10, 2, 5, 10, 9, 1, 1, 4, 3, 7}

The output for this input will be 10 and 1 – they both appear twice. Note that the length of the array is 10 and the maximum value of any element in the array is <= 10. This is an important constraint.

Let us solve this step by step
  1. Loop through the array, arr, starting the iteration at index 0. For each iteration:
    1. Take the value at the current index. At index 0, in the first iteration, the value will be 10.
    2. Subtract one from this value. During iteration 0, the result will be 9. The reason we subtract one is that the value of any element in the array is <= n, where n is the size of the array, while array indices start from 0. So an array of size 10 can contain a maximum value of 10, but there is no arr[10].
    3. Use this result as an index. For iteration 0, this will be 9, so go to element number 9 and negate the value at index 9.
Why should we do this, and what about the duplicates?

Well, if we encounter another value of 10 in this array, the index obtained using the steps above will again be 9. We access the value at position 9, which is now -7. We check if it is less than 0, and if it is, it means we have already visited a value that maps to this index – the value at the current iteration is a duplicate. So step 3 in the algorithm above can be modified as follows:

Loop through the array (let us refer to it as arr) and for each element do the following –

  1. Take the value at the current iteration. At index 0, in the first iteration, the value will be arr[0], which is 10.
  2. Subtract one from this; the result will be 9. Let us refer to this value as index.
  3. Using this index, access the value at that position – the element at index 9, arr[9]. If the value at this index is less than 0, add arr[current_iteration] to the list of duplicate elements.
  4. Negate the value at this index (currently 9 for this iteration 0).
  5. Go back to step 1 for the next iteration, which is current_iteration + 1.
Flowchart
Iteration 0
Iteration 1
Iteration 2
Iteration 3
Iteration 4

In iteration number 4, when we access the value at arr[4], we get -9. If we were to follow step 2 of the algorithm, we would end up with an index of -10, and then in step 3 we would access arr[-10], which is incorrect. So we need to modify the algorithm to take the absolute value of the element before computing the index.

Iteration 5
Iteration 6
Iteration 7
Iteration 8
Iteration 9 (Final iteration)
Java Code
import java.util.ArrayList;
import java.util.List;

public class DuplicatesInArray {

    public static List<Integer> findDuplicates(int[] arr) {
        List<Integer> duplicates = new ArrayList<>();
        for (int i = 0; i < arr.length; i++) {
            // The value may already have been negated in an earlier iteration, hence Math.abs.
            int index = Math.abs(arr[i]) - 1;
            // A negative value at this index means it was visited before - a duplicate.
            if (arr[index] < 0) {
                duplicates.add(Math.abs(arr[i]));
            }
            // Negate the value at this index to record the visit.
            arr[index] = -arr[index];
        }
        return duplicates;
    }

    public static void main(String[] args) {
        int[] arr = {10, 2, 5, 10, 9, 1, 1, 4, 3, 7};

        List<Integer> duplicates = findDuplicates(arr);
        System.out.println(duplicates);
    }
}

The Math.abs is required since we could have negated the value at an index in a previous iteration. If we don’t use Math.abs, we end up computing a negative index and accessing the array with it, which throws an ArrayIndexOutOfBoundsException. For the same reason, Math.abs is also required while adding an element to the list of duplicate values.

Kotlin code
fun findDuplicates(arr: IntArray): List<Int> {

    val duplicates = arrayListOf<Int>()
    for (itemIndex in arr.indices) {

        val goToIndex = Math.abs(arr[itemIndex]) - 1

        if (arr[goToIndex] < 0) {
            duplicates.add(Math.abs(arr[itemIndex]))
        }

        arr[goToIndex] = -arr[goToIndex]
    }
    return duplicates
}

fun main() {
    val arr = intArrayOf(10, 2, 5, 10, 9, 1, 1, 4, 3, 7)
    println(findDuplicates(arr))
}
Conclusion

We have traversed the entire array and the list of duplicates has 2 elements: 10 and 1. The flip side of this algorithm is that the elements of the array are modified, but we have not used any additional storage. We have also managed to do this in O(n), as this is just a single pass over the entire array.

The code in Java and Kotlin is quite simple, but you need to know the algorithm for this particular problem beforehand. The problem is an interesting one given the constraints of space and time, but the solution is not so straightforward. In my opinion, the outcome of a technical interview should not be decided based on just this problem.

Decorator Pattern using Java 8

In this post we are going to look at a couple of examples which use the GoF Decorator pattern. We will then refactor this code to use some of the features introduced in Java 8, which give Java a taste of functional programming. After this refactoring, we will see whether the refactored code becomes concise and easy to understand, or more complicated.

The basic idea behind the decorator pattern is that it allows us to add behavior to an object without affecting or modifying the interface of the class. I would suggest you read up and get a basic idea about the Decorator pattern in case you are completely unaware of it.

I will be considering 2 existing posts showcasing the decorator pattern and then refactoring the code using Java 8.

Typical Decorator – Example one

The first example that we are going to look at is a code sample that I saw on a site, codeofdoom.com, which unfortunately does not exist anymore (the domain seems to have expired). That post had an example which was simple to read and showed how a typical decorator pattern is implemented. I thought I had understood the pattern, but I was mistaken.

The requirement in that example

The requirement that I understood from the code – we need to format a particular text supplied as input. The formatting can be one of several options: return the text as-is, format the text to upper case, replace a word in the text with another word, concatenate the text with another string, or any permutation and combination of these options. I have stated the requirements upfront, but as developers we are usually not aware of all the requirements upfront, and requirements always keep changing. It would not make sense to create separate classes for each permutation and combination. In such situations, the decorator pattern can be extremely useful.

The example has an interface Text and a basic implementation of it which returns the text unchanged. There are 3 classes – AllCaps, StringConcat and ReplaceThisWithThat – which are decorators taking the input text and adding specific behavior to it. The advantage of using the decorator pattern here is that we can combine 1 or more decorators to format the input text dynamically, rather than sub-classing and creating different combinations of the sub-classes. However, the implementation of a typical decorator is not so straightforward to understand. I had to debug a little to get a good understanding of how the code really decorates the object.

So let’s define an interface Text with a method format as shown below –

public interface Text {
    public String format(String s);
}

The BaseText class simply returns the text as is –

public class BaseText implements Text{

    public String format(String s){
        return s;
    }
}

The TextDecorator class serves as base decorator class which other classes extend. This decorator class is like a core class which helps in combining different functionalities.

public abstract class TextDecorator implements Text {

    protected Text text; // protected so that the decorator subclasses can call text.format(...)

    public TextDecorator(Text text) {
        this.text = text;
    }

    public abstract String format(String s);
}

The AllCaps class takes the input and formats it to uppercase –

public class AllCaps extends TextDecorator{

    public AllCaps(Text text){
        super(text);
    }

    public String format(String s){
        return text.format(s).toUpperCase();
    }
}

The StringConcat class calls format and then concatenates the input string –

public class StringConcat extends TextDecorator{

    public StringConcat(Text text){
        super(text);
    }

    public String format(String s){
        return text.format(s).concat(s);
    }
}

And finally, the class which replaces text “this” with “that” –

public class ReplaceThisWithThat extends TextDecorator{

    public ReplaceThisWithThat(Text text){
        super(text);
    }

    public String format(String s){
        return text.format(s).replaceAll("this","that");
    }
}

Test class to run the decorators –

public class TextDecoratorRunner{

    public static void main(String args[]){

        Text baseText = new BaseText();
                
        Text t = new ReplaceThisWithThat(new StringConcat(new AllCaps(baseText)));

        System.out.println(t.format("this is some random text"));
    }
}

Notice how the call is done: the decorators are composed inside-out, but the code reads from the outside-in.

A pixel is worth 1024 bits
Flow of the Decorator pattern

The left-hand side shows the calls to the format method in each decorator class, and the right-hand side shows how the decorators hold references to each other – how they are chained.

The final output from this is – THIS IS SOME RANDOM TEXTthat is some random text.

Refactoring the decorator pattern using Java 8

Well, now to the main task of re-implementing the same in a functional style. The BaseText class shown above has a method, format, which simply takes a String and returns a String. This can be represented using the functional interface Function<String,String>, introduced in Java 8.

import java.util.function.Function;

public class BaseText implements Function<String,String>{

    public BaseText(){
    }

    @Override
    public String apply(String t) {
        return t;
    }
}

Now, each decorator implements a simple piece of string-formatting functionality, and they can all be combined in a single class using static methods, as shown below.

public class TextDecorators
{

    public static String allCaps(String s){
        return s.toUpperCase();
    }

    public static String concat(String input) {
        // StringConcat appended the original input string; here the sample text is
        // hard-coded because each function only receives the piped-through value.
        return input.concat("this is some random text");
    }

    public static String thisWithWhat(String input) {
        return input.replaceAll("this", "that");
    }

}

This simply leads us to the following –

public class TextDecoratorRunner {

    public static void main(String[] args) {
        String finalString = new BaseText()
                .andThen(TextDecorators::allCaps)
                .andThen(TextDecorators::concat)
                .andThen(TextDecorators::thisWithWhat)
                .apply("this is some random text");

        System.out.println(finalString);
    }
}

The code above does function chaining and uses method references. It takes the text and passes it through a chain of functions which act as decorators. Each function applies a decoration and passes the decorated string as output to the next function, which treats it as input. In this scenario, there is no need to have separate classes which act as decorators, or the abstract base class. This implementation offers the same flexibility as the typical solution, but I think it is also much easier to understand compared to the typical way of implementing the decorator pattern. There is also no inside-out or outside-in business when we call the function.

Typical Decorator – Example two

The second example that we are going to consider is shown here: groovy decorator. I would suggest you read it to get a basic understanding of the use case. Also note the confusion there over whether the complete text should be logged in upper case or the timestamp logged in lower case.

Decorator pattern refactored

Let us try and use the same concept of function chaining to refactor this.

The code for the Logger interface would look like this –

public interface Logger {
	public void log(String message);
}

The SimpleLogger class –

public class SimpleLogger implements Logger {

	@Override
	public void log(String message) {
		System.out.println(message);

	}
}

The individual decorations can be represented as below –

import java.util.Calendar;

public class Loggers {
	
	public static String withTimeStamp(String message){
		
		Calendar now = Calendar.getInstance(); 
		return now.getTime() + ": "+  message;
	}
	
	public static String upperCase(String message){
		
		return message.toUpperCase();
	}
}

And finally using function chaining –

import java.util.function.Function;
import java.util.stream.Stream;

public class LoggerTest {
	public static void main(String[] args) {

        Logger logger = new SimpleLogger();

        Function<String,String> decorators = Stream.<Function<String,String>>of(Loggers::withTimeStamp, Loggers::upperCase)
                .reduce(Function.identity(), Function::andThen);

        logger.log(decorators.apply("G'day Mate"));

	}
}

The code above looks a little daunting at first glance, but it is actually simple. Let us break it down –

The key to understanding this is the Stream.of call, to which I pass method references that are the individual decorators.

  1. To combine or chain the decorators, we use Stream.of. The Function<String,String> in Stream.<Function<String,String>>of is a type witness for the compiler – it tells it that the stream elements are of type Function<String,String>. Using Stream.of, we form a stream of functions.
  2. Now to the reduce part: reduce chains all of them together into a single function. What do we want to do with each function as it comes out of the stream? We want to combine them, and this is done using andThen – exactly what the 2nd parameter to reduce does. The first parameter to reduce is the initial value; here it is Function.identity(), a function that returns its input unchanged. Every reduction needs a starting point, and in this case the starting point is the identity function (see the sketch after this list).
  3. The chaining yields a Function<String,String>, decorators.
  4. We call the apply method on it with the parameter G’day Mate, which passes the string through the chain of functions (decorators) and finally sends the result to the logger.log method.
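To make the reduction concrete, here is a hand-expanded sketch of the function chain it builds (using the Loggers class from above):

Function<String,String> decorators = Function.<String>identity()
        .andThen(Loggers::withTimeStamp)
        .andThen(Loggers::upperCase);

// equivalent to upperCase(withTimeStamp("G'day Mate"))
logger.log(decorators.apply("G'day Mate"));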

This version of the decorator pattern is easier to understand: we simply took a message, added the timestamp to it, converted it to upper case and logged it. The Stream.of and the method references being passed to it might seem difficult at first read, but trust me, it is more of a new way of solving the problem – and an easier one in my opinion.

Conclusion

We took 2 examples of the decorator pattern, and both show that there is certainly an easier way to implement it. In both cases the functional style is more concise and easier to understand, and code that is easier to understand is always easier to maintain. Can we safely conclude that we probably don’t need the typical decorator pattern anymore?

Well, in the examples that we considered, we had just one method to decorate: the format method in the first example and the log method in the second. In these scenarios, the functional style definitely seems better, and there is no need to create the different classes and the abstract class and then use the cryptic way of calling them. But what if we had more than a single method to decorate? This is something that needs to be looked at, and I will try to answer it in another post.

Integration testing using Testcontainers in a Spring Boot 2, JPA application

In this post we will be looking at how to perform integration testing of the repository layer using Testcontainers. We will set up Testcontainers in a Spring Boot 2 application, use JPA for persistence and PostgreSQL as the database. Besides this, we will also be using the JUnit 5 framework.

Why integration testing of the database?

In any software application, it is a known fact that data is of utmost importance. In a web application, this data will usually be handled at the database layer. In an application which uses JPA for persistence, the repository and the JPA entities are major components of the database layer.

To ensure that the repository layer and the entities are functioning appropriately, we should write unit tests. These are usually written against an in-memory database, and that is definitely required. The only catch here – should we wait until production to find out whether the database layer behaves as expected against the actual type of database we use, as opposed to the in-memory database?

What would be even better is if we could do integration testing against the same kind of database that we use in the production environment – MySQL, Oracle, PostgreSQL etc. But wouldn’t setting this up be difficult and laborious for just integration testing?

Enter Testcontainers

Using Testcontainers, we can start a containerized instance of the database to test our database layer, without going through the pain of actually setting one up on every developer’s machine. Testcontainers is a Java library that provides disposable and lightweight instances of databases. It actually does a lot more than just that; you can read about it here.

How does it help us?

Running JUnit tests against the actual type of our production database gives us feedback before we actually go into production. This means we can worry less about our database layer and breathe a little easier. There will always be other issues in production – NullPointerExceptions in our code – that we can worry about! Since Testcontainers spawns a containerized environment for us to run our tests, we need to have Docker installed.

Getting started with the code

We will be using Spring Boot 2.3.0 along with Spring Data JPA, JUnit 5 to run the tests, Flyway to create the tables and Testcontainers version 1.14.1.

Snippet of the pom.xml
<!-- The PostgreSQL db driver -->        
<dependency>
	<groupId>org.postgresql</groupId>
	<artifactId>postgresql</artifactId>
	<scope>test</scope>
</dependency> 
<dependency>
	<groupId>org.testcontainers</groupId>
	<artifactId>testcontainers</artifactId>
	<version>${testcontainer.version}</version>
	<scope>test</scope>	
</dependency>
<!-- test container support for postgre module -->
<dependency>
        <groupId>org.testcontainers</groupId>
        <artifactId>postgresql</artifactId>
        <version>${testcontainer.version}</version>
        <scope>test</scope>
</dependency>
<!-- JUnit 5 integration with testcontainer -->
<dependency>
	<groupId>org.testcontainers</groupId>
	<artifactId>junit-jupiter</artifactId>
	<version>${testcontainer.version}</version>
	<scope>test</scope>
</dependency>
The application.properties file
spring.jpa.hibernate.ddl-auto=none
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQL94Dialect
spring.jpa.show-sql=true

The spring.jpa.hibernate.ddl-auto property has been set to none because we are using Flyway to create the database tables. The initialization script can be found under the /db/migration folder.

Starting the containerized PostgreSQL database and tying it with Spring
public abstract class AbstractContainerBaseTest {

	protected static final PostgreSQLContainer postgres;

	static {
		postgres = new PostgreSQLContainer("postgres:9.6.15")
										 .withDatabaseName("socialmediasite")
										 .withUsername("postgres")
										 .withPassword("password");
		//Mapped port can only be obtained after container is started.
		postgres.start();
	}
	
	static class PropertiesInitializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {
		public void initialize(ConfigurableApplicationContext configurableApplicationContext) {			
			TestPropertyValues
					.of("spring.datasource.driver-class-name=" + postgres.getDriverClassName(),
							"spring.datasource.url=" + postgres.getJdbcUrl(),
							"spring.datasource.username=" + postgres.getUsername(),
							"spring.datasource.password=" + postgres.getPassword())
					.applyTo(configurableApplicationContext.getEnvironment());
		}
	}
}

The main purpose of this abstract class is to start the container using the Testcontainers library. All other test classes will extend this class. The static block creates the container, and this one instance will be shared by all test classes. There is a manual call to the start method; the library takes care of shutting the container down automatically.

The PropertiesInitializer class is needed because Spring Data JPA has to talk to this containerized database – it pushes the container’s JDBC URL and credentials into the Spring environment.
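A test class can then wire this in via Spring’s @ContextConfiguration. A minimal sketch, assuming a UserRepository for the User entity (the repository name is illustrative):

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.jdbc.AutoConfigureTestDatabase;
import org.springframework.boot.test.autoconfigure.jdbc.AutoConfigureTestDatabase.Replace;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.test.context.ContextConfiguration;

@DataJpaTest
@AutoConfigureTestDatabase(replace = Replace.NONE)
@ContextConfiguration(initializers = AbstractContainerBaseTest.PropertiesInitializer.class)
class UserRepositoryIntegrationTest extends AbstractContainerBaseTest {

	@Autowired
	private UserRepository userRepository; // hypothetical repository for the User entity

	@Test
	void shouldSaveAndLoadUser() {
		// exercise userRepository against the containerized PostgreSQL here
	}
}

The @AutoConfigureTestDatabase(replace = Replace.NONE) part matters: without it, @DataJpaTest would swap in an embedded database and bypass the container.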

Initialization of Testcontainers, PostgreSQL, Spring Boot and Spring Data JPA
Verifying the output
PostgreSQL container created and closed by Testcontainers

I am using the docker ps --format command to list the containers started by the Testcontainers library. As shown in the image above, one can see the container named competent_raman, which is the PostgreSQL container. This single container instance is shared by all the test cases. There is also an additional image, quay.io/testcontainers/ryuk:0.2.3 – a container created by the Testcontainers library which ensures that all other containers are shut down and cleaned up when the JVM shuts down.

Database and tables in the containerized PostgreSQL database.

As you can see in the image above, I am using the docker exec -it command to connect to the same containerized instance (competent_raman) of the PostgreSQL database. Then, using the command psql -h 192.168.99.100 -p 32813, we connect to the dockerized database; -p indicates the port number and -h the host. The \c socialmediasite command connects us to the database, and finally the \d command lists all the tables.

Once all the tests are executed, the Testcontainers library removes the container instances that it created.

The JPA entities SocialMediaSite and User used in the application have a one-to-many relationship.

Conclusion

Writing unit tests against an in-memory database like H2 is certainly helpful, but writing integration tests against the actual type of production database takes us to the next level – more confidence, fewer failures in production. The combination of Testcontainers, Spring Boot 2 and JUnit 5 makes it easy to get started on this journey. You can find the entire source code here.

Understanding @SpringBootApplication

The objective of this post is to get an understanding of what the @SpringBootApplication annotation does in a Spring Boot application. Sometimes we just tend to add an annotation, everything works magically and we are happy.

But when we want to make a few changes, things start breaking, components are not found, and then we blame the magic that we enjoyed earlier. So it is always a good idea to get a better understanding of what some of these annotations do.

In a Spring Boot application, we need a class with a main method annotated with @SpringBootApplication annotation as shown below:

package com.boot.jpa;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class SpringBootJpaApplication {

	public static void main(String[] args) {
		SpringApplication.run(SpringBootJpaApplication.class, args);
	}

}

This is the starting point for a Spring Boot application. The @SpringBootApplication annotation is really a shorthand – a combination of the following annotations:

1. @SpringBootConfiguration

This signifies that the application is not just a normal Spring application but a Spring Boot application. It is really just @Configuration plus Spring Boot – more of a specialization. @Configuration applied on a class signifies that the class contains Spring bean configurations using @Bean.

2. @EnableAutoConfiguration

This is the super intelligent annotation which kicks in the auto-configuration for Spring Boot. Auto-configuration is done by inspecting the classpath and configuring the beans that may be required. If you provide your own configuration for a bean, Spring Boot will not configure that bean again.

How is this auto-configuration done?

Auto-configuration is done via the SpringFactoriesLoader class, which picks up and reads the META-INF/spring.factories file. It is like a hook into instantiating certain beans found on the classpath. A sample is shown below. The EnableAutoConfiguration class is the ‘key’, and it can have many ‘values’ – classes which are read and conditionally configured depending on what is already configured by the developer and what is found on the classpath. For example, only if RabbitMQ and the Spring AMQP client are found on the classpath (usually added via Maven or Gradle) does Spring Boot try to configure RabbitMQ.

# Auto Configure
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
org.springframework.boot.autoconfigure.amqp.RabbitAutoConfiguration,\
org.springframework.boot.autoconfigure.data.jpa.JpaRepositoriesAutoConfiguration,\
....

3. @ComponentScan

Any Spring application will usually consist of beans which are managed by the Spring framework. To tell the framework which beans to manage, we use @ComponentScan. This annotation scans for your beans and registers them with the Spring application context. Later, you can inject these beans using @Autowired. Examples of components are classes annotated with @Controller, @Repository, @Service etc.

One has to be careful, as @ComponentScan will only scan the package of the class on which it is defined and the packages below it.

A better way to organize your packages is to have the class containing @SpringBootApplication in the root package of your application, so that all sub-packages get scanned and the beans are registered with the application context. If this is not possible for some reason, you can of course add the @ComponentScan annotation and specify the base package(s) to scan for Spring beans, as sketched below.
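A sketch of that (com.boot.jpa is the application’s root package from the example above; com.boot.shared is made up for illustration):

package com.boot.jpa;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ComponentScan;

@SpringBootApplication
@ComponentScan(basePackages = {"com.boot.jpa", "com.boot.shared"})
public class SpringBootJpaApplication {

	public static void main(String[] args) {
		SpringApplication.run(SpringBootJpaApplication.class, args);
	}
}

Note that declaring @ComponentScan explicitly replaces the default scan that @SpringBootApplication would otherwise configure.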

Conclusion

Annotations in the Spring and Spring Boot frameworks make the life of a developer easy and increase productivity. However, it is important to understand the ‘what and how’ of these annotations to be a more effective and efficient developer.

Java AutoCloseable : How does it work?

Most of us know that AutoCloseable is an interface that a class needs to implement to ensure that resources are automatically released. Let us consider a very simple example:

public class MyResource implements AutoCloseable{
    
	public void openConnection(){
		System.out.println("Opening expensive resource...");
	}
	
	public static void main(String args[]) throws Exception{
		
		MyResource myResource = new MyResource();
		myResource.openConnection();
	}

	@Override
	public void close() throws Exception {
		System.out.println("Releasing expensive resource...");		
	}
}

The output from this is simply :

Opening expensive resource…

The close method was never called.

AutoCloseable – Automatic Resource Management?

We kind of assumed that implementing the AutoCloseable interface will magically do everything for us. Isn’t this feature also called Automatic Resource Management? I think the word automatic can be a little misleading here. Maybe that is why this feature is also called try-with-resources. Well, as a programmer, I do forget things.

What we have clearly forgotten is to wrap the resource in a try block. If you read the Javadocs (which can be a bit boring at times), it is clearly mentioned that the resource needs to be wrapped in a try block. But I am sure we don’t really read the Javadocs that carefully. Most IDEs nowadays show us a warning that the resource is never closed. Having said that, let us take a look at the bytecode:

public static void main(java.lang.String[]) throws java.lang.Exception;
Code:
0: new #5 // class blog/MyResource
3: dup
4: invokespecial #6 // Method “<init>”:()V
7: astore_1
8: aload_1
9: invokevirtual #7 // Method openConnection:()V
12: return

On line 9, there is an invokevirtual call to the openConnection method, and nothing at all about a call to the close method. There is just a return after the call to openConnection.

Try block to the rescue

Let us go ahead and wrap the resource in a try block.

public class MyResource implements AutoCloseable{
    
	public void openConnection(){
		System.out.println("Opening expensive resource...");
	}
	
	public static void main(String args[]) throws Exception{
	
		try(MyResource myResource = new MyResource()){
			myResource.openConnection();	
		}
	}

	@Override
	public void close() throws Exception {
		System.out.println("Releasing expensive resource...");		
	}
}

This time the output is as expected:

Opening expensive resource…
Releasing expensive resource…

So putting a try around the resource did invoke the close method. To understand this, let us take a look at the bytecode:

public static void main(java.lang.String[]) throws java.lang.Exception;
Code:
0: new #5 // class blog/MyResource
3: dup
4: invokespecial #6 // Method “<init>”:()V
7: astore_1
8: aconst_null
9: astore_2
10: aload_1
11: invokevirtual #7 // Method openConnection:()V
14: aload_1
15: ifnull 85
18: aload_2
19: ifnull 38
22: aload_1
23: invokevirtual #8 // Method close:()V
26: goto 85
29: astore_3
30: aload_2
31: aload_3

See line number 23 above. The compiler inserts the call to the close method after the call to openConnection. This is done using the invokevirtual instruction.
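Conceptually, the generated code behaves like a hand-written try/finally. The sketch below is a simplification of what the compiler emits: the resource is closed in a finally block, and an exception thrown by close while another exception is in flight is recorded as suppressed instead of masking it.

		MyResource myResource = new MyResource();
		Throwable primary = null;
		try {
			myResource.openConnection();
		} catch (Throwable t) {
			primary = t;
			throw t;
		} finally {
			if (primary != null) {
				// an exception is already in flight - do not let close() mask it
				try {
					myResource.close();
				} catch (Throwable suppressed) {
					primary.addSuppressed(suppressed);
				}
			} else {
				myResource.close();
			}
		}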

Conclusion

If you implement the AutoCloseable interface and have an expensive resource which should be closed, remember to wrap the resource in a try block.

Testing JPA entities using Spring Boot 2, JUnit 5 and Java 14

In this article we will be looking at how to get started with testing JPA entities and the repository layer using Spring Boot 2.2, JUnit 5 and Java 14. I have written a similar post here which uses Spring Boot 1.5, JUnit 4 and Java 8. As the changes are significant, I decided to keep them separate instead of updating that post.

I will be focusing mostly on the changes I had to make to the code to upgrade the libraries mentioned above. You can find the complete source code on GitHub here.

Let us consider the same example of a one-to-many relation between a SocialMediaSite and a User.

The @DataJpaTest

This remains the key ingredient for running JPA-related tests in Spring Boot. @DataJpaTest disables full auto-configuration and applies only the configuration relevant to JPA tests. This concept hasn’t changed between Spring Boot 1.5 and 2.x. Now, let’s take a look at the areas where the code needed modifications.

Changes to the pom.xml

Upgrading the spring boot version to 2.2.6

<parent>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-parent</artifactId>
	<version>2.2.6.RELEASE</version>
	<relativePath /> <!-- lookup parent from repository -->
</parent>

Upgrading java version to 14

<properties>
	<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
	<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
	<java.version>14</java.version>
</properties>

Upgrading the dependency section to exclude JUnit 4

The spring-boot-starter-test dependency pulls in both the vintage JUnit 4 and the JUnit 5 dependencies. Since we will be using JUnit 5, we exclude the vintage engine.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-test</artifactId>
	<scope>test</scope>
	<exclusions>
		<exclusion>
			<groupId>org.junit.vintage</groupId>
			<artifactId>junit-vintage-engine</artifactId>
		</exclusion>
	</exclusions>
</dependency>

JUnit 5 consists of 3 different sub-projects – the JUnit Platform, JUnit Vintage (JUnit 3 and 4 support) and JUnit Jupiter (JUnit 5). All JUnit 5 annotations reside in the org.junit.jupiter.api package.

Support for preview features in Java 14

<plugin>
	<artifactId>maven-compiler-plugin</artifactId>
	<configuration>
		<release>14</release>
		<compilerArgs>
			<arg>--enable-preview</arg>
		</compilerArgs>
		<forceJavacCompilerUse>true</forceJavacCompilerUse>
		<parameters>true</parameters>
	</configuration>
</plugin>

<plugin>
	<artifactId>maven-surefire-plugin</artifactId>
	<configuration>
		<argLine>--enable-preview</argLine>
	</configuration>
</plugin>
Use of the Record type

The record type has been added as a preview feature in Java 14. The EmailAddress class has now been modified into a record; it was a hand-written value object before.

public record EmailAddress(String emailAddress) {

	public static final Pattern VALID_EMAIL_ADDRESS_REGEX = Pattern.compile("^[A-Z0-9._%+-]+@[A-Z0-9.-]+\\.[A-Z]{2,6}$",
			Pattern.CASE_INSENSITIVE);

	public EmailAddress {
		Assert.isTrue(isValid(emailAddress), "Email Address is Invalid !");
	}

	private static boolean isValid(String emailAddress) {
		return emailAddress != null ? VALID_EMAIL_ADDRESS_REGEX.matcher(emailAddress).matches() : false;
	}

}
Changes to the Test classes
  1. There is no need to add the @RunWith(SpringRunner.class) annotation anymore.
  2. The @Before annotation has been replaced with @BeforeEach to perform any kind of setup before each test. Similarly, @After has been replaced with @AfterEach.
  3. @Rule and @ExpectedException have been removed in JUnit 5. The test classes have been refactored to reflect this; the assertThrows and assertEquals methods from the JUnit 5 library are used instead.
@Test
public void testShouldReturnExceptionForInvalidEmailAddress() {
		
	var exception = assertThrows(IllegalArgumentException.class, () -> new EmailAddress("test@.com"));
	assertEquals("Email Address is Invalid !", exception.getMessage());
}

The assertThrows method takes a functional interface, Executable, as its second parameter. We pass the block of code that needs to be executed as the Executable.
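Putting these changes together, a repository test ends up looking roughly like this (a sketch; the test class name and method bodies are illustrative, while @DataJpaTest, @BeforeEach, @Test and TestEntityManager are the actual APIs discussed here):

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.boot.test.autoconfigure.orm.jpa.TestEntityManager;

@DataJpaTest
class SocialMediaSiteRepositoryTest {

	@Autowired
	private TestEntityManager entityManager;

	@BeforeEach
	void setUp() {
		// seed common test data here - this replaces JUnit 4's @Before
	}

	@Test
	void shouldPersistSocialMediaSiteWithUsers() {
		// persist a SocialMediaSite with its users via entityManager and assert on it
	}
}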

Conclusion

The @DataJpaTest annotation helps us test the JPA entities and the repository layer. This is also called test slicing, as we are testing only the JPA components. The @DataJpaTest annotation contains @ExtendWith(SpringExtension.class), which integrates the Spring testing framework with the JUnit 5 model.

We have not explored many features of JUnit 5 in this example, but Spring Boot’s testing features and JUnit 5 do make a powerful combination.

Switch on enums in Java 14

In this article we will be taking a look at switch statements and expressions on enums. If you are new to switch expressions, which were introduced in Java 14, take a look at my previous blog here.

Let us consider an e-commerce application which maintains the status of an order. Let’s keep this simple and say that the order can be in one of the following states – Placed, Confirmed, Cancelled, Delivered. This can be represented using an enum as shown below:

public enum OrderStatus {
    PLACED,
    CONFIRMED,
    CANCELLED,
    DELIVERED
}

An order class contains this OrderStatus as shown below:

public class Order {
    private OrderStatus orderStatus;

    public void setOrderStatus(OrderStatus orderStatus) {
        this.orderStatus = orderStatus;
    }

    public OrderStatus getOrderStatus() {
        return orderStatus;
    }
}
Switch statement and enum

Consider a processOrder method which makes use of a switch statement as shown below:

private void processOrder(OrderStatus orderStatus) {
        String status = "";
        switch (orderStatus) {
            case PLACED, CONFIRMED, DELIVERED:
                status = "Success";
                break;
            case CANCELLED:
                status = "Fail";
                break;
        }
        //Further processing with status.
}

This code will compile without any problem. There is however one problem with it. If we add a new state (let us call it RETURNED) to the OrderStatus enum but forget to handle it in one of the switch statements over the enum, the code will silently do nothing for that case. The usual practice is therefore to add a default case, so that the code does not fail silently without any indication. With a default case, the code can be modified to throw an error indicating to us that there is a change in the enum which we need to take care of.

   private void processOrder(OrderStatus orderStatus) {
        String status = "";
        switch (orderStatus) {
            case PLACED, CONFIRMED, DELIVERED:
                status = "Success";
                break;
            case CANCELLED:
                status = "Fail";
                break;
            default:
                throw new IllegalArgumentException("Invalid order status : " + orderStatus);
        }
        //Further processing with status.
    }
Switch expressions and enum

If we refactor the processOrder method using switch expressions, the code would look like this:

private void processOrder(OrderStatus orderStatus) {
        String status = switch (orderStatus) {
            case PLACED, CONFIRMED, DELIVERED -> "Success";
            case CANCELLED -> "Fail";
        };
        //Further processing with status.
}

If we were to introduce a new status, RETURNED, in the OrderStatus enum, the compiler immediately flags an error that the switch expression does not cover all possible input values. We don’t even need to write a default case that throws an exception to handle this scenario.
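For reference, the enum with the new status would then look like this:

public enum OrderStatus {
    PLACED,
    CONFIRMED,
    CANCELLED,
    DELIVERED,
    RETURNED
}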


So in the case of switch expressions on enums, the case blocks must cover all possible values.

Switch expression and default block

In the case of a switch expression on an enum, the compiler inserts an implicit default block if the code does not already have one. This is extremely useful, since it indicates a change in the enum definition. The default block is inserted as shown below (snippet decompiled from the .class file):

private void processOrder(OrderStatus orderStatus) {
        String var1;
        switch(orderStatus) {
        case PLACED:
        case CONFIRMED:
        case DELIVERED:
            var1 = "Success";
            break;
        case CANCELLED:
        case RETURNED:
            var1 = "Fail";
            break;
        default:
            throw new IncompatibleClassChangeError();
        }
    }

This default case is inserted only if we don’t write one ourselves.

Conclusion

With the introduction of switch expressions in Java 14, code using enums in switch expressions is more robust. The compiler checks whether all possible values have been handled, and inserts an implicit default case in case we don’t write one.

If the code containing the switch is not recompiled after the enum is changed, the implicit default case throws an IncompatibleClassChangeError to signal the problem. This feature will definitely avoid the silent failures that we saw with switch statements on enums.

Switch expressions in Java 14

In this article we will be taking a look at switch expressions, which have been incorporated in JDK 14, released on 17th March 2020. Switch expressions were made available as a preview language feature in JDK 12 and 13. Based on the feedback, changes were made, and this feature has now been finalized in JDK 14. Let us start with a simple example of a switch block:

public int getNumberOfRooms(int budget) {
        int numberOfRooms = 0;
        switch (budget) {
            case 1000000:
                numberOfRooms = 1;
                break;
            case 2000000:
            case 3000000:
                numberOfRooms = 2;
                break;
            case 4000000:
            case 5000000:
                numberOfRooms = 3;
                break;
            default:
                System.out.println("Hello");
        }
        return numberOfRooms;
}
Vintage switch

The method above takes a budget as a parameter and, depending on the budget, returns the number of rooms that one can buy. For simplicity, let us assume a flat price list: 1 million can get us a single room, 2 and 3 million can get us 2 rooms, and finally 4 and 5 million can get us 3 rooms.

Some observations about the usage of switch statement in the example above:

  1. Most usages of a switch statement do some processing and return a value.
  2. Most case blocks have a break statement to avoid fall-through; otherwise execution continues into the next case block until it encounters a break. It is quite easy to forget a break, and while adding one serves the purpose, it can make the code difficult to read at times. As experienced Java programmers, we are now used to it.
  3. The default case is optional, and it does not force the programmer to throw an exception or return a value. I could add a System.out.println there and the code would compile and run.
  4. Multiple cases which share common logic have separate case labels on different lines with no break in between them, which takes an effort to read and understand. Again, we are used to it, but imagine folks being introduced to the language for the first time.
The new modern switch
public int getNumberOfRooms(int budget) {

        int numberOfRooms = switch (budget) {
            case 1000000 -> 1;
            case 2000000, 3000000 -> 2;
            case 4000000, 5000000 -> 3;
            default -> throw new IllegalStateException("No rooms available for this budget");
        };
        return numberOfRooms;
}

  1. The new switch returns a value which is assigned to the variable on the left side – hence the name switch expression. Notice the semicolon at the end of the switch block.
  2. Multiple cases are combined on the same line with a comma.
  3. The right hand side is separated with an arrow (->) instead of a colon.
  4. There is no break in between cases; there is no fall-through by default.
  5. The default case has to either return a value or throw an exception.
  6. Last but not least, and in fact an important difference: this code is definitely more pleasing to the eye and easier to understand.
Yield and scope in the modern switch

Let us introduce a small change in the business requirement. If we have a budget of 3 million or 5 million, we need to give the customer a discount. Doing this using the vintage switch leads to the following code:

public int getNumberOfRooms(int budget) {
        int numberOfRooms = 0;
        switch (budget) {
            case 1000000:
                numberOfRooms = 1;
                break;
            case 2000000:
            case 3000000:
                int finalPrice = (budget == 3000000) ? budget - 100000 : budget;
                System.out.println("Final price after discount " + finalPrice);
                numberOfRooms = 2;
                break;
            case 4000000:
            case 5000000:
                int finalPriceThreeRooms = (budget == 5000000) ? budget - 100000 : budget;
                System.out.println("Final price after discount " + finalPriceThreeRooms);
                numberOfRooms = 3;
                break;
            default:
                throw new IllegalStateException("No rooms available for this budget");
        }
        return numberOfRooms;
}

Notice the variables finalPrice and finalPriceThreeRooms: we introduced 2 variables which do the same thing, but we had to give them different names. This is because any variable introduced in a case block is in scope for the entire switch block.

Let us implement this using the modern switch construct

public int getNumberOfRooms(int budget) {
        int numberOfRooms = switch (budget) {
            case 1000000 -> 1;
            case 2000000, 3000000 -> {
                int finalPrice = (budget == 3000000) ? budget - 100000 : budget;
                System.out.println("Final price after discount " + finalPrice);

                yield 2;
            }
            case 4000000, 5000000 -> {
                int finalPrice = (budget == 5000000) ? budget - 100000 : budget;
                System.out.println("Final price after discount " + finalPrice);

                yield 3;
            }
            default -> throw new IllegalStateException("Unexpected value: " + budget);
        };
        return numberOfRooms;
}

Using the modern switch, if a case block has multiple lines, we use a yield statement which indicates that the case block yields a value. Besides this, another salient feature is that the finalPrice variable introduced inside a case block remains local to that block, which makes the code easier to read and understand. I have seen lots of code where variables are named using weird combinations like temp1, temp2, temp3 or finalPrice1, finalPrice2, finalPrice3… This change does not force you to give meaningful names to variables, but it signals a clear intention about the scope of a variable in the case block.

More on the yield statement

The yield statement was introduced after seeking feedback. In JDK 12, the break statement was used in switch expressions, which was confusing, so it was decided to introduce the yield statement instead. I have been calling the switch from JDK versions prior to 12 ‘vintage’, but it continues to exist alongside the modern switch.

Some nuances with mixing and matching
  • We cannot use the arrow and the colon in the same switch block; we get a compile-time error: “Different case kinds used in the switch”.
public int getNumberOfRooms(int budget) {
        int numberOfRooms = 0;
        switch (budget) {
            case 1000000 : numberOfRooms = 1;
            case 2000000, 3000000 -> {
                …
            }
            case 4000000, 5000000 -> {
                …
            }
            default -> throw new IllegalStateException("Unexpected value: " + budget);
        };
        return numberOfRooms;
}
  • In a switch statement, we can use the new case labels with the arrow. Doing so means there is no fall-through, and multiple cases can be combined using a comma. You can still add a break statement (no compiler error), but with the arrow case label it is redundant since there is no fall-through anyway.
public int getNumberOfRooms(int budget) {
        int numberOfRooms;
        switch (budget) {
            case 1000000 -> {
                numberOfRooms = 1;
            }
            case 2000000, 3000000 -> {
                int finalPrice = (budget == 3000000) ? budget - 100000 : budget;
                System.out.println("Final price after discount " + finalPrice);
                numberOfRooms = 2;
                //redundant because of arrow.
                break;
            }
            case 4000000, 5000000 -> {
                int finalPrice = (budget == 5000000) ? budget - 100000 : budget;
                System.out.println("Final price after discount " + finalPrice);
                numberOfRooms = 3;
            }
            default -> throw new IllegalStateException("Unexpected value: " + budget);
        }
        return numberOfRooms;
    }
  • In a switch expression, we can use the old vintage style with a colon for the case blocks. But since this is an expression, each case block must return a value using yield.
public int getNumberOfRooms(int budget) {
        int numberOfRooms = switch (budget) {
            case 1000000: {
                yield 1;
            }
            case 2000000, 3000000: {
                int finalPrice = (budget == 3000000) ? budget - 100000 : budget;
                System.out.println("Final price after discount " + finalPrice);
                yield 2;
            }
            case 4000000, 5000000: {
                int finalPrice = (budget == 5000000) ? budget - 100000 : budget;
                System.out.println("Final price after discount " + finalPrice);
                yield 3;
            }
            default:
                throw new IllegalStateException("Unexpected value: " + budget);
        };
        return numberOfRooms;
    }

Conclusion

The new switch expression feature will make our code more readable, intuitive and less error-prone. It also acts as a foundation for more changes to come in the Java language, one of them being pattern matching.

References

https://openjdk.java.net/jeps/361

Java 8 default methods – Basic introduction

Let us assume that you have designed and published a library, and you have many clients who use it. You have followed “object oriented” concepts and hence made use of interfaces, abstract classes and utility classes to design your library.

After a few years, you realize that the core library is quite old and needs to move ahead with the times, and hence you would like to add new features to it. At the same time, you do not want your existing clients knocking at your door at 2:00 am to complain that a new piece of functionality broke their existing code.

What are the possible solutions to this?

  1. Have a new interface define these methods, and introduce an abstract class to implement the interface and provide some skeletal behavior for the new functionality. This is actually a design technique that has been used in the Collection framework. The problem here is that we rely on abstract classes, which means an existing client cannot extend from more than 1 class.
  2. Classes with static methods – the problem here is that the clients cannot override the functionality if they would like to. This means clients have to use the single implementation, which does not really give them flexibility.

So how do we achieve all of the following?

  1. Add new functionality to an old library.
  2. Not break existing client code.
  3. Allow clients to inherit behavior and not state
  4. Allow clients to override behavior if needed

The answer to this is default methods. A default method can be added directly to an interface, with an implementation. Yes, you read that right! Interfaces with methods – not just the declaration but the definition as well.

Some examples in the Java library which use default methods:

1) stream method in the Collection interface – Enables any collection to be streamed.

    default Stream<E> stream() {
        return ...;
    }

2) reversed method in the Comparator interface – Enables reversing of the comparator

    default Comparator<T> reversed() {
        return Collections.reverseOrder(this);
    }

3) forEach and spliterator in the Iterable interface

One purpose of default methods was to make changes to the Collection framework without breaking other libraries that use the Collection API, like Apache CollectionUtils or Hibernate, while at the same time providing us with new functionality.

We all know that an interface can extend another interface in Java. With the addition of default methods, things get a little complicated, and hence there are some things to remember when using them:

In the code below, the Parent interface has default methods, and Child extends Parent but does not override anything.
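A sketch of that code (names assumed to match the output below):

public interface Parent {

    default void methodOne() {
        System.out.println("Method one...");
    }

    default void methodTwo() {
        System.out.println("Method two...");
    }
}

public interface Child extends Parent {
    // no overrides - both default methods are inherited
}

public class ConcreteClass implements Child {
    public static void main(String[] args) {
        ConcreteClass c = new ConcreteClass();
        c.methodOne();
        c.methodTwo();
    }
}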

Output:

Method one…

Method two…

Conclusion: Default methods are inherited in the Child interface.

In the next example below:

The Parent interface has default methods, and Child extends Parent and overrides one of them.
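A sketch along the same lines:

public interface Parent {

    default void methodOne() {
        System.out.println("Parent: Method one...");
    }

    default void methodTwo() {
        System.out.println("Parent: Method two...");
    }
}

public interface Child extends Parent {

    @Override
    default void methodTwo() {
        System.out.println("Child: Method two...");
    }
}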

Output:

Parent: Method one…

Child: Method two…

Conclusion: The most specific implementation – the one lowest in the interface hierarchy – wins!

What if the ConcreteClass class also provided an implementation for methodTwo() above?
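A sketch of that variation:

public class ConcreteClass implements Child {

    @Override
    public void methodTwo() {
        System.out.println("ConcreteClass: Method two...");
    }

    public static void main(String[] args) {
        ConcreteClass c = new ConcreteClass();
        c.methodOne();
        c.methodTwo();
    }
}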

Output:

Parent: Method one…

ConcreteClass: Method two…

Conclusion: Class wins over the interface hierarchy!

What happens when we have the same default method in 2 “unrelated” interfaces and 1 concrete class implementing both interfaces?
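A sketch of the setup (the interface names One and Two and the method callMe are referenced below; the class name and messages are assumed):

public interface One {
    default void callMe() {
        System.out.println("One: call me...");
    }
}

public interface Two {
    default void callMe() {
        System.out.println("Two: call me...");
    }
}

// Does not compile: callMe() is inherited from both One and Two
public class MyClass implements One, Two {
}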

Output: the Java compiler complains that the method callMe is being inherited from types One and Two. It is our responsibility to resolve the conflict. This can be done in the following way:
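A sketch of the resolution, assuming the class is named MyClass as above:

public class MyClass implements One, Two {

    @Override
    public void callMe() {
        // explicitly pick One's implementation
        One.super.callMe();
    }
}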

Depending on which default method we want, we can call either One’s or Two’s version. This is done using the super keyword. Without the super keyword, it would mean we are calling a static method on the interface, which is not the case here. At the bytecode level, this translates to an invokespecial instruction.

To summarize

  1. Interface hierarchy – the most specific wins.
  2. Classes always win: if a class provides an implementation, it beats any default method in the interface hierarchy.
  3. If there is still ambiguity, it is the responsibility of the developer to resolve it, as shown above.

So, does this mean that we do not need abstract classes anymore? Well, that is not right. What clearly distinguishes the two is state – and the fact that a class can extend only 1 class.

So interfaces are always a better idea unless we need state or a skeletal implementation.

Understanding JavaScript hoisting in function declaration and function expression

In my previous blog, I introduced one of the main differences between function declarations and function expressions in JavaScript, called hoisting: function declarations get hoisted but function expressions do not. Hoisting can be understood as something getting pulled right to the top of its respective scope. This understanding is good enough at a conceptual level, but there is a lot more going on behind the scenes.

Before we embark on the details, an important concept to understand is the execution context of a variable or a function in JavaScript. When a variable is declared inside a function, the execution context for that variable is the function in which it is declared. If it is declared outside, the execution context is the global context. This is true for functions as well.

The execution context can be the global code, code inside a function, or any eval code. A call to a function results in the creation of a new (functional) execution context.

var taxRatePercentage = 10;       //global context

function calculateTax(amount){    //new execution context - functional context
    var totalTax;

    function deductInsuranceAmount(){   //new execution context when it is called - functional context
        var insuranceAmount;

    }
}

One of the important things to understand here is that the function deductInsuranceAmount can access the totalTax variable but the calculateTax function cannot access the insuranceAmount variable.

There has to be a mechanism through which the JavaScript engine keeps track of the variables, their values and functions in a particular execution context. This is done by maintaining a lexical environment.

So a couple of things are in play here with respect to an execution context:

  1. The lexical environment
  2. The scope chain – The reason why the deductInsuranceAmount function is able to access the totalTax variable defined in the outer/parent function.

Note – To keep it simple and make the lexical environment easier to comprehend, I am going to leave out the discussion about the scope chain in this blog. However, the outer reference mentioned in the tables below will give you an idea about it.

When a function is called, it is this lexical environment that holds the parameters passed to the function, the variables declared inside the function, and the function declarations for that particular execution context.

For the code below :

var taxRatePercentage = 10;       //global context

function calculateTax(amount){    //new execution context
    var totalTax = 0;
    ....
}
calculateTax(20000);

The lexical environment would look like this:

amount   : 20000
totalTax : 0
outer    : globalEnvironment
Simplified lexical environment for the calculateTax function

And the global environment like this:

taxRatePercentage : 10
outer             : null
Simplified global environment for the code snippet above.

Internally, the lexical environment is actually split into 2 parts:

  1. The environment record, which contains the totalTax and amount bindings shown above.
  2. The outer link.

Two phases of the execution context-

Now, with this brief understanding of the lexical environment, let us turn our attention back to the execution context. To reiterate: when a function is called, an execution context is created. However, there are 2 phases involved with respect to the execution context. The 2 phases are:

  1. When the function is called
  2. When the function is actually executed.

These 2 phases affect the lexical environment. Let us see how.

function calculateTax(amount){    //new execution context
    var totalTax = 0;
    ....
}
calculateTax(20000);

When calculateTax is called (step 1 above), the execution context is created. Note that our function is not yet being executed. The JavaScript engine creates a lexical environment for this execution context and does the following:

amount   : 20000
totalTax : undefined
outer    : globalEnvironment
Simplified lexical environment during the creation phase of the execution context – when the function is called but not yet executed

The parameters passed to the function are initialized with appropriate values, but the totalTax variable declared inside remains ‘undefined’ during the creation phase. During phase 2 (the execution phase), the code is executed, and the environment record is read by the JavaScript engine and modified as and when necessary.

amount   : 20000
totalTax : 0
outer    : globalEnvironment
Lexical environment when the code/function is actually executed.

If we had a function expression inside this function :

function calculateTax(amount){    //new execution context - functional context
    var totalTax = 0;
    calculateInsuranceAmount();
    var calculateInsuranceAmount = function deductInsuranceAmount(){
        ...
    }
}
calculateTax(20000);

The corresponding lexical environment when the function calculateTax is called, during the creation phase, would look like this:

amount                   : 20000
totalTax                 : undefined
calculateInsuranceAmount : undefined
outer                    : globalEnvironment
Lexical environment when the function is called but not yet executed.

The code above will result in an error; it will complain that calculateInsuranceAmount is not a function. This is because the function expression is not hoisted – the variable which refers to the function is initialized to undefined during the creation phase of the execution context. During the execution phase, this environment record in the lexical environment is consulted and the error is thrown.

But if we have a function declaration and the call like this :

sayHello();

function sayHello(){
  console.log('hello');
}

When sayHello is called, remember the 2 phases. During phase 1, the lexical environment is created, but in the case of a function declaration it already contains a reference to the actual function in memory.

sayHello : reference to function sayHello in memory
outer    : globalEnvironment
Lexical environment during the creation phase, when sayHello is called but not yet executed.

When the code is actually executed, sayHello is present in this lexical environment and it points to the actual function in memory.

If we were to contrast this with a function expression:

sayHello();

var sayHello = function(){
  console.log('hello');
}
// Throws "TypeError: sayHello is not a function"

This is because, during the creation phase, the lexical environment will look like this:

sayHello : undefined
outer    : globalEnvironment
Function expression in the lexical environment when the function is called but not yet executed.

This behavior is due to the fact that a function expression is only initialized at the code execution stage.

This is what happens in the case of variables (var) too:

function printToConsole(){
  console.log(x);
  var x = 20;
}

printToConsole();

This prints undefined because during the creation phase the variable x has the value undefined, and during the execution phase we try to access it in the console.log call before the assignment has run. The lexical environment is looked up and undefined is printed.

So in step 1 (during the context creation phase, the variable is created):

x     : undefined
outer : globalEnvironment
x in the environment record is undefined during the creation phase. The outer environment is the global environment itself.

Step 2: The code starts executing, tries to access x by referring to the lexical environment, and finds it as undefined, as shown above.

If we had it like this :

function printToConsole(){
  var x = 20;
  console.log(x);
}

printToConsole();

In step 1 (during the context creation phase, the variable is created):

x     : undefined
outer : globalEnvironment
Creation phase – the lexical environment for the printToConsole function above.

When the code starts executing and the interpreter reaches var x = 20, it looks up the lexical environment, sees an x there, and modifies its value to look like the following:

x     : 20
outer : globalEnvironment
During the execution phase, the variable x is found in the lexical environment and set to the appropriate value.

Now, when the code reaches the console.log call, it can clearly see the value 20 assigned to the variable x. This is hoisting from a behind-the-scenes perspective, and this is exactly what happens with function declarations and function expressions.

Confusion between Lexical and Variable Environment

The specification uses both a lexical environment and a variable environment when it describes the execution context. I have referred to the lexical environment in my explanations above. You can refer to the specification here to get a better understanding, and I would also urge you to read an excellent article by Dmitry Soshnikov for a deeper treatment.

My understanding is that the Lexical Environment was introduced for let and const declarations and for with and catch statements, which create block scope. Here is another link which explains the difference between the two.

Conclusion

Every function executes in a new execution context. An execution context, among other things, has a lexical environment which tracks the variables, their values, and the functions. An execution context has 2 phases: the creation phase and the execution phase. The lexical environment is created during the creation phase and then modified during the execution phase. This gives us the impression of hoisting of variables and function declarations.

Further Reading

If you want to take your JavaScript skills to the next level, Toptal has a set of Q&A here to help you get started.