
Stop/Remove All Docker containers

Hints for working with Docker.

In version 1.13.x and higher:

Remove all unused containers, volumes, networks and images (both dangling and unreferenced).

docker system prune

Link : doc docker

Removes all stopped containers.

docker container prune

Link : doc docker

Hacks and hints

There are many ways to stop/remove all Docker containers.

 On Unix/Linux :

docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)

One-liner:

docker rm -f $(docker ps -a -q)

For all images:

docker rmi $(docker images -q)

Remove all containers and volumes :

docker rm -v $(docker ps -a -q)

Stop containers faster (in parallel):

docker ps -a -q | xargs -n 1 -P 8 -I {} docker stop {}

 Windows

FOR /f "tokens=*" %i IN ('docker ps -a -q') DO docker rm %i

Powershell

docker rm @(docker ps -aq)


SonarQube and ReactJS

This article shows how to use SonarQube with ReactJS and its JSX files. I will use both the SonarQube JavaScript plugin and the additional Sonar ESLint plugin.


For those who missed my previous article, I have created a new SonarQube plugin to extend the JavaScript analysis.

Installation and Configuration

The first step is to download the plugin directly from Github here.

Download the plugin

Find the latest release.

Find the latest release

Copy it into your SonarQube extensions folder.

Copy the plugin
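
A minimal sketch of that copy step on Linux, assuming a hypothetical jar name and SonarQube install path (adjust both to your release):

# Hypothetical plugin jar and SonarQube home; adapt the paths to your installation
cp ~/Downloads/sonar-eslint-plugin-*.jar ~/sonarqube-6.0/extensions/plugins/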

Restart the server

Restart the server by calling the following commands (here on Linux):

sonarqube-6.0 ./bin/linux-x86-64/sonar.sh stop
  Stopping SonarQube...
  Waiting for SonarQube to exit...
  Stopped SonarQube.
  ➜  sonarqube-6.0 ./bin/linux-x86-64/sonar.sh start

 Enabling custom rules in SonarQube

Don’t forget to modify your SonarQube profile to enable the new ESLint rules :

Add the ESLint rules to your SonarQube profile

Enable the ESLint rules in your SonarQube profile

Preparing your project

 Handling SonarQube Scanner

Most projects require the SonarQube Scanner (see the wiki link on JavaScript analysis). Download it somewhere on your disk and unzip it.

Create a file sonar-project.properties in your project.

Copy-paste this content and adapt it:

sonar.projectKey=sleroy:reactjs-demo
sonar.projectName=ReactJS demo
sonar.projectVersion=1.0
sonar.sources=src
sonar.sourceEncoding=UTF-8
sonar.javascript.file.suffixes=.js,.jsx

Don't forget the line sonar.javascript.file.suffixes=.js,.jsx; it's the hack that makes SonarQube work on JSX files!

OK! SonarQube Scanner is configured!

Preparing ESLint

We want to enrich the SonarQube analysis with the additional results of ESLint. ESLint is a popular linter that provides up-to-date rules for many JavaScript frameworks, ReactJS included.

ESLint is upgraded frequently and, through its extension system, offers rules and framework support that you won't find in the regular SonarQube installation.

If you haven't created an ESLint configuration file yet, here is the command:

ESLint Configuration
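
If ESLint is not installed yet, a common way to bootstrap the configuration is the interactive init command; a minimal sketch, assuming a local install in the project:

# Install ESLint locally and generate an .eslintrc configuration interactively
npm install --save-dev eslint
./node_modules/.bin/eslint --init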

You can try the configuration by launching ESLint on your project. It may warn you that some extensions are missing. Install them with NPM or Yarn.

Missing NPM Module

Usually, the ReactJS extension is missing from your project. You can add it as a development dependency (--save-dev) or globally (-g).

Install the missing ESLint ReactJS extension
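
As a sketch, the ReactJS rules come from the eslint-plugin-react package, which can be installed either per project or globally:

# As a development dependency of the project
npm install --save-dev eslint-plugin-react

# Or globally
npm install -g eslint-plugin-react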

With the right configuration and ESLint installed, the scan of a JSX file should work:

Scanning a JSX File
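
For example, a quick manual run on one of the JSX files of the demo project (the path is taken from the analysis log below, and ESLint is assumed to be installed locally):

./node_modules/.bin/eslint src/example/hello.jsx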

OK! ESLint is configured!

 Launching SonarQube Scanner

Launch the SonarQube Scanner with the command:

~/tools/sscanner/bin/sonar-scanner

And the analysis is running …

react-jsx git:(master) ✗ ~/tools/sscanner/bin/sonar-scanner
INFO: Scanner configuration file: /home/sleroy/tools/sscanner/conf/sonar-scanner.properties
INFO: Project root configuration file: /home/sleroy/git/react-jsx/sonar-project.properties
INFO: SonarQube Scanner 3.0.3.778
INFO: Java 1.8.0_121 Oracle Corporation (64-bit)
INFO: Linux 4.10.0-21-generic amd64
INFO: User cache: /home/sleroy/.sonar/cache
INFO: Load global repositories
INFO: Load global repositories (done) | time=211ms
INFO: User cache: /home/sleroy/.sonar/cache
INFO: Load plugins index
INFO: Load plugins index (done) | time=14ms
INFO: SonarQube server 6.0
INFO: Default locale: "fr_FR", source code encoding: "UTF-8" (analysis is platform dependent)
INFO: Process project properties
INFO: Load project repositories
INFO: Load project repositories (done) | time=214ms
INFO: Load quality profiles
INFO: Load quality profiles (done) | time=94ms
INFO: Load active rules
INFO: Load active rules (done) | time=897ms
INFO: Publish mode
INFO: -------------  Scan ReactJS demo
INFO: Load server rules
INFO: Load server rules (done) | time=482ms
INFO: Base dir: /home/sleroy/git/react-jsx
INFO: Working dir: /home/sleroy/git/react-jsx/.scannerwork
INFO: Source paths: src
INFO: Source encoding: UTF-8, default locale: fr_FR
INFO: Index files
INFO: 9 files indexed
INFO: Quality profile for js: Sonar way
INFO: Sensor Lines Sensor
INFO: Sensor Lines Sensor (done) | time=41ms
INFO: Sensor SCM Sensor
INFO: SCM provider for this project is: git
INFO: 9 files to be analyzed
INFO: 0/9 files analyzed
WARN: Missing blame information for the following files:
WARN:   * /home/sleroy/git/react-jsx/src/example/hello.jsx
WARN:   * /home/sleroy/git/react-jsx/src/example/index.jsx
WARN:   * /home/sleroy/git/react-jsx/src/fixtures/this.jsx
WARN:   * /home/sleroy/git/react-jsx/src/example/index.js
WARN:   * /home/sleroy/git/react-jsx/src/example/imager.jsx
WARN:   * /home/sleroy/git/react-jsx/src/fixtures/component.jsx
WARN:   * /home/sleroy/git/react-jsx/src/fixtures/advanced.jsx
WARN:   * /home/sleroy/git/react-jsx/src/fixtures/react.jsx
WARN:   * /home/sleroy/git/react-jsx/src/fixtures/hello.jsx
WARN: This may lead to missing/broken features in SonarQube
INFO: Sensor SCM Sensor (done) | time=504ms
INFO: Sensor XmlFileSensor
INFO: Sensor XmlFileSensor (done) | time=1ms
INFO: Sensor JavaScript Squid Sensor
INFO: 9 source files to be analyzed
INFO: 9/9 source files have been analyzed
INFO: Unit Test Coverage Sensor is started
INFO: Integration Test Coverage Sensor is started
INFO: Overall Coverage Sensor is started
INFO: Sensor JavaScript Squid Sensor (done) | time=893ms
INFO: Sensor Linting sensor for Javascript files
INFO: Sensor Linting sensor for Javascript files (done) | time=1438ms
INFO: Sensor Zero Coverage Sensor
INFO: Sensor Zero Coverage Sensor (done) | time=38ms
INFO: Sensor Code Colorizer Sensor
INFO: Sensor Code Colorizer Sensor (done) | time=2ms
INFO: Sensor CPD Block Indexer
INFO: DefaultCpdBlockIndexer is used for js
INFO: Sensor CPD Block Indexer (done) | time=1ms
INFO: Calculating CPD for 2 files
INFO: CPD calculation finished
INFO: Analysis report generated in 170ms, dir size=24 KB
INFO: Analysis reports compressed in 254ms, zip size=18 KB
INFO: Analysis report uploaded in 39ms
INFO: ANALYSIS SUCCESSFUL, you can browse http://localhost:9000/dashboard/index/sleroy:reactjs-demo
INFO: Note that you will be able to access the updated dashboard once the server has processed the submitted analysis report
INFO: More about the report processing at http://localhost:9000/api/ce/task?id=AVwHr7JyDHBkCqlFC7Sx
INFO: Task total time: 8.046 s
INFO: ------------------------------------------------------------------------
INFO: EXECUTION SUCCESS
INFO: ------------------------------------------------------------------------
INFO: Total time: 10.141s
INFO: Final Memory: 48M/301M
INFO: ------------------------------------------------------------------------

Checking the results

Go to your Sonar interface, and jump directly to the dashboard.

Our project has been analyzed.

SonarQube analysis

We observe that the number of violations increases with the new rules.

SonarQube analysis details

Hurrah, our JSX files are analyzed!

JSX Analysis

In this article, we have installed and configured a new plugin to perform better JavaScript analysis in SonarQube, working with ReactJS and JSX files.


Release of FakeSmtp-junit-runner

Today, I released a new library to help developers write integration tests against mail servers.

The library has been released on GitHub and Maven Central.

fakesmtp-junit-runner


Link: GitHub.

Important: part of the source code of this library has been adapted from the FakeSMTP project. I want to thank its author, since his project inspired the creation of this library.

This library is an extension to JUnit that allows developers to write integration tests where an SMTP server is required.

The how-to is quite simple :

  • Insert the @Rule in your integration tests
  • A fake SMTP server will start
  • You can send mails to it
  • You can control the mailbox
  • Write your own assertions to check the mails

Installation

The project requires JUnit 4.11 or higher. It also requires the SLF4J API to be present on the classpath. I did not bundle them in the library to avoid conflicts.

To use it, add the library to your Maven or Gradle build script:

For Maven:

<dependency>
  <groupId>com.github.sleroy</groupId>
  <artifactId>fakesmtp-junit-runner</artifactId>
  <version>0.1.1</version>
  <scope>test</scope>
</dependency>

For Gradle:

testCompile "com.github.sleroy:fakesmtp-junit-runner:0.1.1"

Usage

Step 1 :

Create a JUnit test:

public class SmtpSendingClassTest {


  @Test
  public void testCase1() {

  }

}

Step 2 :

Add the new JUnit rule with its configuration:

public class SmtpSendingClassTest {

  @Rule
    public FakeSmtpRule smtpServer = new FakeSmtpRule(ServerConfiguration.create().port(2525).charset("UTF-8"));

  @Test
  public void testCase1() {

  }

}

Step 3 :

You are ready to use it; control the mailbox or the server state:

public class SmtpSendingClassTest {

  @Rule
    public FakeSmtpRule smtpServer = new FakeSmtpRule(ServerConfiguration.create().port(2525).charset("UTF-8"));

  @Test
  public void testCase1() {
    Assert.assertTrue(smtpServer.isRunning());
    Assert.assertTrue(smtpServer.mailbox().isEmpty());
  }

}

My weekly DZone's digest #1

This is my first post offering a digest from a selection of DZone's articles. I will pick DZone articles based on my interests.

This week the subjects are : BDD Testing, Bad code, Database Connection Pooling, Kotlin, Enterprise Architecture

A few benefits you get by doing BDD

A few benefits you get by doing BDD : this article is an introduction to the Behaviour-Driven Development practice. It's interesting because we regularly meet teams, developers and architects (pick your favorite) who confuse technical details with functionality. As a result, the design, the tests and the architecture hide the user behaviour (the use cases?) under a pile of technical stones. This article is a nice introduction. To go further, I recommend these articles: Your boss won't appreciate TDD, try BDD; BDD Programming Frameworks; the Java framework JBehave.

Gumption Traps: Bad Code

Bad code, how my code…

Gumption Traps: Bad Code : an article about bad code and how to deal with it.

{% blockquote Grzegorz Ziemoński %} The first step to avoid the bad code trap is to stop producing such code yourself. When faced with existing bad code, one must work smart to maintain motivation. {% endblockquote %}

This is a good opening sentence. This week, I had a meeting with a skilled and amazing team. The meeting's goal was to find a way to tackle the technical debt: the very technical debt that is ruining the application and undermining the team's motivation. What I found interesting and refreshing in this article is the pragmatic tone and the advice.

{% blockquote Grzegorz Ziemoński%} To avoid bad code, try to minimize the amount of newly produced bad code. {% endblockquote %}

How to avoid the depression linked to bad code? First of all, I want to say that developers do not receive enough training on how to improve code. Usually, university / college courses are dedicated to how to use a framework. Therefore, few developers are able to qualify what bad code is, what its characteristics are and, de facto, the ways to improve it. To avoid bad code, I try to demonstrate the personal benefits for developers of improving their skills. Quality is not only a question of money (how much the customer is paying) but rather of how much your company is paying attention to your training and personal development.

A lot of developers are overwhelmed by technical debt without the appropriate tools (mindset, techniques, theory) to handle it. I try to give them gumption about the benefits of being a better developer and how to handle the weaknesses of a sick application. To save a piece of software rather than practicing euthanasia 🙂

Database Connection Pooling in Java With HikariCP

When we discuss database connection pooling, most of my colleagues rely on the good old Tomcat DBCP. However, there is a niche, really fun and interesting: the guys competing for the best connection pool. And HikariCP is clearly a step ahead of everyone.

The article Database Connection Pooling in Java With HikariCP presents how to use a custom connection pool in your software.

Hikari Performance

I think it would have been great to present the differences with the standard DBCP and further debate the advantages/disadvantages of the solutions. A good idea for the next article 🙂

Concurrency: Java Futures and Kotlin Coroutines

Java Futures and Kotlin Coroutines: an interesting article about how Java Futures and Kotlin coroutines can coexist. Honestly, I am a little bit disappointed; I thought that Kotlin would make things easier, like in Node.js.

Are Code Rules Meant to Be Broken?

Another article about code quality, and we may be dubious about whether an answer to that question exists: Are Code Rules Meant to Be Broken?

I won't go too much into the details; the author's point of view seems to be that code rules are good if they are respected. If they are broken, it implies that the code rules need to evolve 🙂 What do you think about it?

Java vs. Kotlin: First Impressions Using Kotlin for a Commercial Android Project

This article is interesting since it presents a feedback session on using Kotlin in an Android project.

The big pluses of using Kotlin are:

  • Null safety through nullable and non-nullable types, safe calls, and safe casts.
  • Extension functions.
  • Higher-order functions / lambda expressions.
  • Data classes.
  • Immutability.
  • Coroutines (added in Kotlin 1.1).
  • Type aliases (added in Kotlin 1.1).

Quality Code Is Loosely Coupled

This article explains one of the most dangerous sides of coding: coupling. A must-read article despite the lack of diagrams.

Five Habits That Help Code Quality

This article is a great introduction to code assessment. These five habits are indeed things to track in your software code as signs of decay and code sickness.

The habits are:

  • Write (Useful) Unit Tests
  • Keep Coupling to a Minimum
  • Be Mindful of the Principle of Least Astonishment
  • Minimize Cyclomatic Complexity
  • Get Names Right

10 Good Excuses for Not Reusing Enterprise Code

This article is really useful in the context of digital transformation to assess which software you should keep and which you should throw away.

Examples of excuses:

  • I didn't know that code existed.
  • I don't know what that code does.
  • I don't know how to use that code.
  • That code is not packaged in a reusable manner.

Test proven design

An interesting article and example on how to improve your own code using different skills. I really recommend reading this article and the upcoming ones: Test proven design.


I have tried Vue.js and I love it

Vue.js Framework

I have tried Vue.js and just love it.

A few weeks ago, I started a new project for which I have to build a public website.

Context

After spending really long hours on the internet, browsing, collecting every possible testimonial and piece of advice and comparing them to my first impressions, I decided to start with a hybrid / multi-page site.

(If you are interested in the reasons, they will be the subject of another post.)

A hybrid / multi-page site is a website where the content is rendered both server-side and client-side, as opposed to a single-page application (SPA), which is fully client-side, or a classical server-side site (PHP...). Since I want the power of modern JS frameworks (double binding, refreshing, Ajax widgets, ES2016, reactive programming) and some control over which pages need to be reloaded, I had to make a choice.

The list of choices is somewhat limited if I keep only the 5 most popular ones (yes, I am resolutely not a pioneer of the JavaScript jungle).

The framework selection

I made the following list :

  • Angular 2+ (they are increasing the major version number for each patch 😅)
  • React.js
  • AngularJS
  • Ember.js
  • Vue.js
  • jQuery (it is a joke)

Selection criteria

I defined some selection criteria besides the popularity :

No code bloat: specific to JavaScript, the syntax and the lack of native OOP have produced many frameworks with dumb syntax, without any semantic and often syntactic meaning. To overcome the limitations, many frameworks use syntactic sugar, making them a nightmare to memorize. The most ridiculous is the attempt to stick pseudo-theoretical terms onto these syntactic blobs.

A good framework should offer different levels of usage, from the straightforward approach to build a website quickly and easily for the common use cases, to the low-level approach where the experienced developer is able to tune the required details. What has been done in Laravel, the Spring framework or Symfony are good examples.

The Symfony framework is known as a huge galaxy: many components, industrial-grade quality, but an overwhelming complexity if you start head-on.

Therefore they created a micro-framework called Silex to bootstrap a PHP application without the nasty details, and it is dead simple. If you want more complex things, the components behind Silex are the Symfony ones.

For a web framework, always study how it handles forms, especially a basic POST form. It takes five minutes in plain HTML to build an (unsecured) form. How long does it take with the framework?

The same thing goes for Spring and Spring Boot.

The framework must have a business-friendly licence: no doubt, no legal restriction for the future company (by the way, did you know you cannot build weapons software in Java? Please stick to the licence terms…).

An extensible / plug-in architecture. I believe the success of a framework resides in the possibility of enabling the necessary functionalities (aka feature toggling) during your project: authentication, reactive programming, lazy loading, modularity.

The evaluation (aka trolling section)

Based on these selection criteria, here is my evaluation.

Disclaimer: I have high respect for the guys who wrote these frameworks and I do not doubt their outstanding skills.

AngularJS

I have experienced projects with AngularJS and I gave up, since it is a deprecated technology: too much code bloat, slow (I should rather say hard to tune), and all efforts are concentrated on the new Angular framework. Also, I think I would have a problem with my use case when disabling the AngularJS router.

Angular 2

Angular 2: I received training in January and have written several prototypes since. I have been a huge fan of TypeScript and angular-cli. I was happy, thinking they had taken the best ideas from the other frameworks and built a big melting pot.

Angular : melting pot

In Angular, you will find web components, templates à la React.js, opt-in double binding, directives, a modular architecture, lazy loading, and so on. But I progressively came to hate Angular for many details that slow me down in my developments.

I really dislike their API and concepts for building forms. You have two choices, template-driven form design and programmatic form design. The first one is almost useless and the second one is deadly cumbersome.

In Angular, they decided to kill HTML and recreate it. How? Case-sensitive attributes and non-HTML attributes. You cannot use your normal code editor on it. Beautifier tools do not work, or only partially work. And worst of all, they conceived this awful syntax based on brackets and parens. Well, I think they are huge practitioners of the Brainfuck language.

Brainfuck language

The last issue I encountered is their wish to produce an industrial, scalable framework (in the sense that if I put more developers on my project, I maintain a stable learning and complexity curve). Yes, they provide dependency injection and IoC, but it really increases the learning curve.

 React.js

I really wanted to start with React.js. As far as I have studied it, the framework seems full of promise, with some nice pluggable functionalities.

However, around the time I began to use it, a lot of news came out. The concern is about the React.js license, the Facebook license (link1, link2, link3).

Since there is a threat to the future business (everything can be considered a social network, after all), I rejected it.

 Ember.js

I have never tried Ember.js. Based on my reading, the framework is definitely worthy of attention for building SPA applications, but not for my use case. Note: while writing this post, I stumbled on that link, suggesting that maybe I was wrong about Ember.js.

Vue.js

On Twitter, I am receiving a lot of feedback from happy users of Vue.js, so I decided to give it a try.

The syntax seems deadly simple.

Here is a brief summary of my experience:

I did not use vue-cli; I had to create my own packaging to adapt Vue.js to a multi-page architecture.

Code bloat: the Vue.js framework is really simple and the documentation is quite good. The documentation for the vue-loader plug-in is also good, but I really hate the webpack syntax to enable it (rant...).

Learning curve: I did not try the most hard-core functionalities of Vue.js, though I am using vue-loader, a different template renderer (Pug), transitions, a bit of components and lazy loading.

My biggest difficulty has been to keep my JS bundle as small as possible by producing chunks.

The second issue has been to understand why creating a view creates an App with my component below it, using the render() function. However, I think that Vue.js is easier than Angular 2.

As in the previous example, the syntax is quite straightforward, no need to learn complex concepts to begin with.

The framework is also compatible with Typescript and the logic behind is quite simple.

Vue.js can be extended with several plug-ins and functionalities. I did not try all of them, and the fact that you enable them manually comforts me in my approach.

Vue.js does not enforce a particular programming paradigm (IoC, interfaces, reactive programming, or RxJS).

The only reproach I could formulate is a little fear about the Vue.js ecosystem. Please integrate existing libraries rather than trying to recreate or mimic ReactJS libraries.

In conclusion, these frameworks are all legitimate and have their share of practitioners, and I don't blame them. Vue.js has been my choice and I do not regret it yet, since it has made my project easy, fun and effective.

I will try to provide more feedback in the following weeks, especially on form editing, unit testing and E2E testing.

Thanks for your attention


How to migrate from JBoss 5 to 7

This article is part of my working notes on the subject of "How to migrate web applications running on JBoss AS 5 to version 7".

JBoss Application Server – Wildfly

I will go straight to the details, though here are a few lines about JBoss Server first.

From Wikipedia : JBoss Application Server(Now called Wildfly) is an application server authored by JBoss, now developed by Red Hat. WildFly is written in Java, and implements the Java Platform, Enterprise Edition (Java EE) specification. It runs on multiple platforms.

On 20 November 2014, JBoss Application Server was renamed Wildfly.

The product history according to Wikipedia is:

  • 5.1 Release 23 May 2009
  • 7.0 Release 12 July 2011
  • 7.1 Release February 2012
  • 10.1.0 Release August 2016

The JBoss AS community project has been renamed to the WildFly community project wildfly.org

According to this JBoss 5 to 7 in 11 steps article, the benefits are:

Processing time decreased by 25% without any code change. Development speed increased in my opinion (it is really hard to measure it) by 50% and we are much more productive (faster server restarts). Memory footprint lowered from 1GB to 512MB. Finally automatic application redeployment finally works! However there is always a price to pay – the migration took us 4 weeks (2 sprints).

Thanks to the presentation from Roberto Cortez, we have a clear picture of the migration.

[slideshare id=54488564&doc=migrationtalesfromjavaee5to7-151028171122-lva1-app6892]

JBOSS 5 Architecture

JBOSS 7 Architecture

The checklist

Prepare the checklist

When the PaaS or the web application server has to be upgraded, several regressions may happen. The team has to pay attention to:

  • Server functionalities and integration : Performance, Security, Logging, Monitoring
  • Server configuration
  • Server deployment configuration
  • Application deployment configuration
  • Server API regressions
  • Application regressions
  • Training and risk management

Server functionalities and integration : between versions, some functionalities and integrations provided by the server may have evolved, been fixed or simply disappeared.

Server configuration : the way the server is configured (scripts, GUI) may have changed, forcing the team to change their configuration files and find the corresponding new way of doing things.

Server deployment configuration : Your deployment model configuration may have to be upgraded : single node, clustered mode, disaster recovery, high availability, reverse-proxying may behave differently in the new versions.

Application deployment configuration : the way to deploy your web applications may have changed in the new versions (GUI mode to script mode…)

Server API regressions : usually web application servers implement a specific JEE API version, Servlet API and so on. These APIs may have changed, causing regressions in your applications.

Application regressions : JBoss includes many components extending JEE with BPM, persistence and other implementations. It is really important to track your dependencies (using Tattletale or mvn dependency:tree) and interview your team about possible hacks and fixes applied to overcome the limits of JEE 5. This kind of workaround is difficult to migrate.
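
For instance, a quick way to dump that dependency report with plain Maven (no extra installation needed beyond Maven itself):

# Print the resolved dependency tree of every module, including conflicting versions
mvn dependency:tree -Dverbose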

Training and risk management : this kind of migration carries its share of risks and changes. Both can create friction inside your team or between the IT team and your dev teams. To ease the migration, don't forget to dedicate some time to training your teams on the new features of JBoss AS 7. You will also have to adapt your project management to freeze the features for a while until the migration has been done.

Global checklist

This section provides a checklist to help developers and managers evaluate the migration risk of their applications.

 Common issues

Here is a list of common issues during the migration of applications with JBOSS AS.

4.2. Debug Migration Issues

  • 4.2.1. Debug and Resolve Migration Issues
  • 4.2.2. Debug and Resolve ClassNotFoundExceptions and NoClassDefFoundErrors
  • 4.2.3. Find the JBoss Module Dependency
  • 4.2.4. Find the JAR in the Previous Install
  • 4.2.5. Debug and Resolve ClassCastExceptions
  • 4.2.6. Debug and Resolve DuplicateServiceExceptions
  • 4.2.7. Debug and Resolve JBoss Seam Debug Page Errors

  • There is a deadlock when using EJB remoting over SSL. This deadlock is present even in EAP 6.2. We’re now at the point when we have quite a patch set of features backported from WildFly to AS 7.
  • JMS : JBoss Messaging server has been deprecated and compatibility with a JBoss AS 5 server is really tough to maintain. Some solutions exist, as explained later in the article.

 What is changing ?

Here is a summary of the evolutions between JBoss AS version 5 and version 6.

 JBOSS AS 6 changes

Here is the compiled list of modifications including the minor version fixes.

  • [Server functionality, Server API, Application regressions] Module based class loading

In JBoss Enterprise Application Platform 5, the class loading architecture was hierarchical. In JBoss Enterprise Application Platform 6, class loading is based on JBoss Modules. This offers true application isolation, hides server implementation classes, and only loads the classes your application needs. Class loading is concurrent for better performance. Applications written for JBoss Enterprise Application Platform 5 must be modified to specify module dependencies and in some cases, repackage archives.

  • [Server functionality, Server deployment configuration] Domain Management : In JBoss Enterprise Application Platform 6, the server can be run as a standalone server or in a managed domain.
  • [Application Deployment] Deployment Configuration : Standalone Servers and Managed Domains : JBoss Enterprise Application Platform 5 used profile based deployment configuration. These profiles were located in the EAP_HOME/server/ directory. Applications often contained multiple configuration files for security, database, resource adapter, and other configurations. In JBoss Enterprise Application Platform 6, deployment configuration is done using one file. JBoss Enterprise Application Platform 5 configuration files must be migrated to the new single configuration file.
  • [Server functionality, Application deployment configuration] Ordering of deployments : Application Platform 5 applications that consist of multiple modules deployed as EARs and use legacy JNDI lookups instead of CDI injection or resource-ref entries may require configuration changes.
  • [Server functionality, Application deployment configuration] Directory Structure and Scripts : As previously mentioned, JBoss Enterprise Application Platform 6 no longer uses profile based deployment configuration, so there is no EAP_HOME/server/ directory.
  • [Server application, Application deployment configuration, Application regression] JNDI Lookups : JBoss Enterprise Application Platform 6 now uses standardized portable JNDI namespaces.
  • [Server functionality, Server configuration, Application configuration, Application code] : Changing logging dependencies
  • [Server API, Application configuration regressions] Resource adapter configuration : In previous versions of the application server, the resource adapter configuration was defined in a file with a suffix of *-ds.xml. In JBoss Enterprise Application Platform 6, a resource adapter is configured in the server configuration file.
  • [Server libraries] Technologies upgrade : JDK 6, JSF 2, Bean Validation (JSR-303), CDI, EJB 3 (1.1.13)
  • Include mod_cluster
  • Servlet API 3.0
  • Update CL to 2.0.8.GA
  • Update Deployers to 2.0.9.GA
  • Update Javassist to 3.11.0.GA
  • Update JBossWS to 3.2.1.GA
  • Update JBossXB to 2.0.2.Beta3
  • Update JGroups to 2.6.13
  • Update Kernel to 2.0.9.GA
  • Update MC-INT to 2.2.0.Alpha2
  • Update MDR to 2.0.2.GA
  • Update to Entity Manager 3.5 and JPA 2
  • Update to JBoss AOP 2.1.6.GA
  • Update VFS to 2.2.0.Alpha1
  • Upgrade apache-beanutils to 1.8.0
  • Upgrade ha-server-cache-jbc to 2.1.0.GA
  • Upgrade JBoss Cache to 3.2.1.GA
  • Upgrade jboss-common-core to 2.2.16.GA
  • Upgrade jboss-ha-server-cache-jbc to 2.0.1.GA
  • Upgrade JBoss JAXR to 2.0.1
  • Upgrade JBoss LogManager to 1.1.0.GA
  • Upgrade JBoss Security 2.0.4.SP2
  • Upgrade JBossXACML to 2.0.4
  • Upgrade JSF to 2.0.0-RC
  • Upgrade to Java Mail 1.4.2
  • Upgrade to JBossXACML 2.0.3.SP2
  • Upgrade XNIO Metadata to 1.0.1.GA
  • New library JBossWS-CXF
  • library update RestEasy
  • JBoss Messaging JMS & MDB replaced by Hornet MQ
  • New RMI Framework : Remote 3
  • VFS Library update
  • [Server functionalities] new server functionalities : Mod_cluster, JBoss Embedded AS
  • [Application deployment] The legacy pooled invoker has been removed. Applications using the pooled invoker should switch to the JBoss Remoting-based unified invoker, which has been the default detached invoker since 4.2.

 JBOSS AS Release 7

Here is the compiled list of modifications including the minor version fixes.

  • [Server functionality, Application regressions] Security improvement

Unlike previous releases, with AS 7.1, remote access requires secure authentication by default. This includes both management (native, JMX, etc.) and various remote application protocols (EJB, JNDI, JMS, etc.); SSL support has been added for the Remoting interfaces.

  • [Application configuration deployment] Management API improvements : All configuration attributes are updatable via the CLI. Direct edits to the XML are not necessary.
  • [Server functionality] Various Administration Console Improvements and Management changes
  • [Server API] Remote Connectivity Added support for remote EJB, JNDI and JMX invocation over JBoss Remoting 3, IIOP, Remote JMS. Three modes for accessing remote EE components using JNDI (Client, Traditional Remote, and Delegated).
  • [Server deployment model] Clustering Enhancements : Standalone Servers and Managed Domains : numerous fixes in HTTP session replication, clustered web SSO, EJB stateful session bean replication, EJB load-balancing and failover, JPA XPC propagation
  • [Server functionality] CLI regressions : jboss-admin.sh renamed to jboss-cli.sh; the "data-source add" "--pool-name" argument seems to have changed to "--name".
  • [Server libraries] Technologies upgrade :
  • EJB 3.1 Full – Adds a number of key features, including remote communication, asynchronous method invocation, timers, message-driven beans, and legacy compatibility with EJB 2.
  • CMP 2 – Provides a legacy persistence manager which predates JPA. This is beneficial to legacy applications which make use of EJB 2.x Entity Beans.
  • JAX-WS 2.2 – Allows simplified usage of Web Services in the EE platform.
  • JAX-RPC 1.1 – Offers legacy support for older Java EE Web Services applications.
  • JAX-RS 1.1 – Supports the construction of RESTful Web Services using the Java EE platform.
  • JavaMail 1.4 – Allows Java EE applications to send and receive e-mail
  • JCA 1.6 – Provides a mechanism for third parties to provide support for custom data sources, as well as connection pooling and transaction management for database access.
  • JMS 1.1 – Adds advanced messaging support to EE applications.
  • IIOP – Supports interoperability with other application servers and non-Java CORBA clients.
  • JSR-88 – Allows for managing deployments to a Java EE server in a portable fashion.
  • Update mod_cluster to 1.2.0.Final
  • IronJacamar 1.0.7.Final
  • Upgrade Infinispan to 5.1.0.CR3
  • Upgrade to JBossTS 4.16.1
  • Upgrade jboss-metadata to 7.0.0.Beta33
  • Upgrade JGroups to 3.0.3.Final
  • Upgrade JBoss Marshalling to 1.3.6.GA
  • Upgrade httpcore to 4.1.4
  • Upgrade to JBossWS 4.0.1.GA and Apache CXF 2.4.6
  • Update to classfilewriter 1.0.1
  • Upgrade to JSF 2.1.7
  • Upgrade PicketLink to 2.0.2.Final
  • Upgrade PicketBox to 4.0.7.Final
  • Upgrade commons-beanutils to 1.8.3
  • Upgrade Google Guava to 11.0.2
  • [Server functionalities] new server functionalities : Mod_cluster, JBoss Embedded AS
  • [Application deployment] The legacy pooled invoker has been removed. Applications using the pooled invoker should switch to the JBoss Remoting-based unified invoker, which has been the default detached invoker since 4.2.

How to migrate : plan and tasks

  • JDBC configuration
  • Classpath references
  • Global Modules Reference
  • JMS migration :

According to this link, we thought it would be really hard to connect to a JMS server based on JBoss 5. It turned out that you have 2 options and both work fine:

  • Start HornetQ server on your own instance and create a bridge to JBoss 5 instance
  • Use a JMS bridge to move the existing messages
  • Use Generic JMS adapter: https://github.com/jms-ra/generic-jms-ra

 Application packaging and configuration

  • Repackaging Dependencies and fix the EAR Layout: link
  • Install and configure the JDBC Driver link
  • Update the Resource Adapter Configuration
  • Configure the datasource for Hibernate and JPA : If your application uses JPA and currently bundles the Hibernate JARs, you may want to use the Hibernate that is included with JBoss Enterprise Application Platform 6.

In WildFly 8, a resource adapter is configured in the server configuration file. If you are running in domain mode, the configuration file is the domain/configuration/domain.xml file. If you are running in standalone mode, you will configure the resource adapter in the standalone/configuration/standalone.xml file.

More details here: How to migrate from AS5 or AS6 to WildFly.

  • Migration of the shell scripts, integration test scripts, deployment scripts.

Application code and configuration migration

Here is the list of tasks implying some rewriting inside your application code.

  • Migration JEE 5 to JEE 6
  • Upgrade to JPA 2.0
  • Update your SOAP Implementations using JBossWS-CXF
  • Upgrade Hibernate from 3 to 4
  • Replacing JBOSS Cache by Infinispan Cache
  • Configure JAX-RS / Resteasy changes
  • Fix Hibernate’s sequencer
  • Replace JBoss AOP Interceptors : JBoss AOP was used by the EJB container. However, in AS 7, the EJB container uses a new mechanism. If your application uses JBoss AOP, you need to modify your application code accordingly.
  • Migrating JNDI : Migrating JNDI namespaces
  • Update the datasources : link
  • Rewriting of your RMI code : JBoss 5 and 7 are totally different and this kind of communication will not work.
  • Using CDI instead of plain old Singletons

 Tooling

Here is a list of useful tools to assist you in your migration.

  • IronJacamar : to update your datasource configuration
  • Tattletale : to find the application dependencies
  • Gilder : An application for migrating the configuration of JBoss AS 5-based servers to JBoss AS 7-based servers.
  • Tools and tooling to migrate to JBoss 6 : Link
  • Upgrades to newer versions of WildFly or JBoss EAP may be handled using the JBoss Windup migration tool.

 References:


How I switched my blog from OVH to Google Container Engine

In this short story, I will relate how I migrated my personal blog from a classic VM instance to Google Cloud using Kubernetes, Docker and Nginx.

One of my personal goals was also to have a cloud-deployed website without spending any money.


Motivations

Long story short, I have been using Docker on several projects for a year now. I progressively got accustomed to the ease of deployment provided by Docker. The issue? The day I launched my blog (in February 2017), for time and cost reasons, I picked a VPS instance from OVH.

Why OVH? Clearly it is one of the cheapest IaaS providers and quite popular here in France. I have been using it for several projects without any major issues.

OVH has a public cloud offer, OVH Public Cloud. However, the offer looked immature at that time, both in its documentation and in reviews. The second reason for my rejection is about cloud adoption: a lot of experts are turning toward GCloud and AWS. Spending my efforts on OVH would not provide enough visibility in the short term, in my job.

To better accompany my colleagues and customers in adopting the cloud, I decided to eat my own dog food. And among my personal projects, I decided to migrate my blog first.

And to switch my blog from OVH to Google Cloud (Container Engine).

 Pricing

Here are some interesting articles about pricing and functionalities for the major cloud providers :

Technical situation

My blog is hosted on a VPS server (shared instance on OVH). I have installed on it Apache 2, some monitoring and security systems, and Let's Encrypt to obtain a free SSL certificate.

Hexo command line

My blog is not using the classical WordPress; I am quite fond of static website generators and, more recently, of flat/headless CMSes.

I am using HexoJS as a CMS. The main features: you write your articles in Markdown and the blog is regenerated to produce static files, resulting in quite optimized pages.

Hexo command line

How to switch from a legacy deployment to the cloud.

Here is how I proceeded to migrate this website.

 A) Create my Google Cloud Account

Yes, we have to start from the beginning: I created a new Google Cloud account. Though it is rather easy to create an account, I was surprised: it was impossible for me to pick an individual account.

It’s even in the Google FAQ (FAQ).

{% blockquote By Google FAQ %} I’m located in Europe and would like to try out Google Cloud Platform. Why can’t I select an Individual account when registering? {% endblockquote %}

The reason (thanks, EU…) is dumb as fuck: in the European Union, Google Cloud Platform services can be used for business purposes only.

For information, in Switzerland the restriction is lifted.

Interestingly enough, the free trial on Google Cloud has been expanded to $300 for one year.

B) Discover Google Cloud

Well, the UI is easy to manipulate, even with this nagging collapsing menu on the right side.

Google Cloud Console

The documentation is quite abundant, but I found a few major issues:

  • Lack of pictures and schemas : most concepts are described with a bunch of words. Fortunately, some very kind people made great presentations (here and here).
  • Copy/paste from the Kubernetes website : yeah, most of the documentation can be found on Kubernetes, logically.
  • Lack of information and use cases : for some examples, such as using this damn Ingress. Why don't people provide Gists 🙂

I created a cluster with two VM instances, 0.6GB of RAM and 1 core. Indeed I wanted to play with the load balancing features of Kubernetes.

Create a cluster

C) Replicate my server configuration as a Docker container

The easiest and most fun part has been to reproduce my server configuration with Docker and to include an evolution: I wanted to switch from Apache 2 to Nginx.

Here is the first solution I created. I used a ready-made (and optimized) container image for Nginx and modified my build script to generate the Docker image. The generated website is baked directly into the Docker image.

FROM bringnow/nginx-letsencrypt:latest

RUN mkdir -p /data/nginx/cache
COPY docker/nginx/nginx.conf /etc/nginx/nginx.conf
COPY docker/letsencrypt /etc/letsencrypt
COPY docker/nginx/dhparam /etc/nginx/dhparam
COPY public /etc/nginx/html

I made several tests using the command docker run to check the configuration on my own machine.

docker run --rm -it --name nginx us.gcr.io/sylvainleroy-blog/blog:latest

D) How to host my Docker image ?

My second question was: how to store my Docker image?

Creating my own registry ? Using a Cloud Registry ?

I have used two different container registries in my tests.

First is the Docker Hub.

Docker Hub

What I appreciate the most about Docker Hub is that I can delegate the creation of my Docker images to the Hub by triggering a build from GitHub. The mechanism is quite simple to enable and really convenient. Each modification of my Dockerfile triggers a build that automatically creates my Docker image!

Here is a small drawing to explain it:

Docker Hub & Builds draw

And some part of the configuration.

Docker Builds Configuration

However, Google Cloud also offers a container registry, and using both was redundant. I kept the Google one to use with CircleCI.

Therefore, for the time being, I am storing my Docker images on Google Cloud.

Google Cloud Container Registry

With this kind of command :

gcloud docker -- push us.gcr.io/sylvainleroy-blog/blog:0.1
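
For completeness, the image first has to be built and tagged with the registry path; a minimal sketch, run from the root of the blog repository:

# Build the image locally and tag it with the Google Container Registry path
docker build -t us.gcr.io/sylvainleroy-blog/blog:0.1 .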

E) The Cloud migration in itself

Maybe it is my fancy side, but I have only used the GCloud CLI to perform the operations.

Install Google SDK

Everything goes smoothly, but don't forget to install the Kubernetes CLI:

gcloud components install kubectl

I had a problem with the CLI. It could not see my new projects (only some of them) and I had to authenticate again:

gcloud auth login

And perform a new login to see the update.

Don’t forget to also add your cluster credentials using the GUI instructions (button connect near each cluster).

Google Cluster

gcloud container clusters get-credentials --zone us-central1-a blog

 Understanding the concepts of Pod and Deployment

It took me time to understand what a deployment and a pod are. Coming from docker and docker-compose, I could not map the concepts.

That is one of my concerns with Kubernetes: some technical terms are poorly chosen and do not really help to understand what is behind them.

Well, I finally created a deployment to run several replicas of my Docker container. The deployment file basically declares that it uses my previously built Docker image and how many copies I want. The selector and label mechanism is quite handy.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: blog-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
        role: master
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: us.gcr.io/blog/blog:0.9
        ports:
          - containerPort: 80
            name: http
          - containerPort: 443
            name: https 

I used the following command to create it:

kubectl create -f pod-blog.yml

KubeCtl Pod informations

 Automating the generation, docker image building and deployment

I have automated the full cycle of site generation, Docker image building, pushing to the container registry and pod reload using CircleCI.

CircleCI Deployment Schema

And the good thing is that all these things are free.

 Feedback

After playing with it for two weeks in my spare time, here is my feedback:

 Rolling Update

The deployment mechanism and how the rolling update is performed are impressive and a time-saver. Some banks are still using a manual or semi-automated way, like Ansible, to deploy their software, and the rolling updates are performed awkwardly. Here, Kubernetes deploys the new version in the background, controls its state (roughly) and, if the conditions are met, switches from the old version to the new version. I am using this mechanism to bench my new Docker images and push the new versions.
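
A minimal sketch of how such a rolling update can be triggered from the CLI; the deployment and container names come from the deployment file above, the image tag is hypothetical:

# Point the deployment to a new image; Kubernetes replaces the pods progressively
kubectl set image deployment/blog-deployment nginx=us.gcr.io/blog/blog:0.10

# Watch the rollout until the new version is fully live
kubectl rollout status deployment/blog-deployment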

 Load Balancing mess

I had to struggle a lot to set up my load balancer. Well, not at the beginning: Kubernetes and GCloud describe precisely how to set up a level-4 LoadBalancer. It takes a few lines of YAML and it was fine. However, I had huge difficulties when I decided to switch to TLS and an HTTPS connection with Let's Encrypt.
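
For reference, the plain HTTP, level-4 part really is short; a sketch that exposes the deployment above as a LoadBalancer service (HTTP only, using the port declared in the deployment):

# Create a Service of type LoadBalancer in front of the deployment
kubectl expose deployment blog-deployment --type=LoadBalancer --port=80 --target-port=80

# The external IP shows up here once GCloud has provisioned the forwarding rule
kubectl get services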

I met several difficulties :

  • How to register my SSL certificate on a Docker container that is not deployed yet?
  • What the fuck is a NodePort? What is the difference between a ClusterIP, a LoadBalancer and an Ingress?

NodePork

  • Where should I store my certificate? In the GCloud configuration or in my NGINX?
  • Why is Ingress not working with multiple routes?

To address these issues, I found the following temporary solutions:

  • I am using Certbot/Let's Encrypt certification through the DNS challenge. That way, I can generate my certificates "offline" (a sketch is given after this list).
  • I am still not sure about the definition of a NodePort: either I need a LoadBalancer for a single container in my pod, or I simply open the firewall. These concepts, introduced with Kubernetes, are still obscure to me, even after several readings.
  • I took the decision to implement my HTTPS load balancing by modifying my NGINX configuration to store the certificate and relying on a level-4 LoadBalancer to dispatch the flow.
  • I tried really hard to make Ingress work (the level-7 LB), but even the examples were not working for me ("impossible to map the port number 0" error) and it is really badly documented.
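
Here is the certificate-generation sketch mentioned in the first point, assuming Certbot is installed and using a hypothetical domain; the DNS challenge only asks you to create a TXT record, so no running web server is needed:

# Manual DNS-01 challenge: certbot prints a TXT record to add to the DNS zone
certbot certonly --manual --preferred-challenges dns -d blog.example.com

# The certificate and key then live under /etc/letsencrypt/live/blog.example.com/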

 Persistent volume

The documentation about persistent volumes is not precise, either in Kubernetes or in GCloud; there are important differences between the Kubernetes documentation and the Google implementation, and even between versions.

You have many possibilities :

  • Use a PersistentVolume and a PersistentVolumeClaim and attach them to your containers
  • Generate a volume directly from your deployment file

Another issue I met: my Docker container was failing (and the pod itself) because the persistent volume that gets created is never formatted.

But why ????

Indeed, in your deployment file, you have properties to set the required partition format, but no formatting will ever be performed.

And therefore I had the following issues:

  • How to mount something unformatted?
  • How to mount something unformatted in a container of the pod without using the deployment?
  • Why is there so little documentation for Google Container Engine (in comparison with Google Compute Engine)?

The recommended solution is to create a VM instance by HAND using Google Compute Engine, attach the disk to that instance, mount it manually and trigger the formatting. WTF.
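
A sketch of that manual procedure with the gcloud CLI; the disk, node and zone names are hypothetical:

# Create the persistent disk
gcloud compute disks create blog-data --size=10GB --zone=us-central1-a

# Attach it temporarily to a node (or any throwaway VM) and format it once
gcloud compute instances attach-disk my-node --disk=blog-data --device-name=blog-data --zone=us-central1-a
gcloud compute ssh my-node --zone=us-central1-a
sudo mkfs.ext4 -F /dev/disk/by-id/google-blog-data

# Detach it so Kubernetes can claim it as a gcePersistentDisk volume
gcloud compute instances detach-disk my-node --disk=blog-data --zone=us-central1-a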

If you have a better way to handle the issue, I am really interested!

Conclusion

After a month of deployment, I haven't spent a buck. My page response time decreased from 3.4s to 2.56s. And I am not waking up during the night, eyes full of horror, thinking about how to reinstall the site: I only have a container to push.

I am not using the Kubernetes UI yet and I don't see the necessity yet; the CLI offers almost everything.

Cleaning up a cluster, its pods and its deployments requires several steps and could maybe be simplified.

 Pricing

One very important aspect of my project was also to decrease the bill to host the site.

Currently, here is my bill for 1600 visits per month :

  • I have a GitHub private repository (~$7/month)
  • I am using the free tier of CircleCI, offering me the use of a private GitHub repository and a significant number of builds
  • Docker Hub is free for any number of public repositories and 1 private Docker repository
  • I am using the free tier of Google; I spent $1 in one month and the bill is shared between my blog and my other projects
  • I have a cluster of 2 VMs for my blog

Compared to my 79€/year for my VPS.

Interesting links


How Docker is disrupting Legacy IT Companies

Thanks to its popularity, Docker has disrupted many companies and blurred the silos between Developers and Operations. In this article, based partially on my own experiences, I will depict some of the disruptions that containers have provoked in IT companies. I hope that this article depicts familiar situations and brings you arguments to overcome the obstacles to the propagation of container technology 🙂

Disclaimer: although I quote Docker quite a lot in this article, it is not an endorsed article. If you have a better alternative, simply replace Docker with another container technology; the arguments should still be valid.

If you appreciate this article, please relay or like it.

 Day one : I don’t need to spend four hours to setup my development environment

It is my first day on the job. A brand-new laptop, decent performance and features. Default operating system: Windows.

Well, I am a hardcore Linux developer and I have spent a lot of effort learning to live without Excel, PowerPoint and Outlook. And who knows? Maybe my customers won't be on Windows. Therefore, I am wondering how to create my new software development environments. Node.js? Java? Mobile? Each technology comes with its own tools, servers and configuration.

What are the choices? The company is fair and provides a decent laptop with sufficient power. However, software installation on the native OS has been blocked; I need to work through virtual machines. Virtual machines? What a cumbersome solution. I need to download ISOs or OVAs and proceed with the installation of my software. What is your virtual machine creation strategy? I made a quick survey of my colleagues, who confirmed that they create one virtual machine per customer project. And they share their virtual machines like Pokémon. WTF? Many dozens of gigabytes are transferred through USB3 or on a tiny hard drive, and the team is ready. Well, after several hours.

Docker, or any similar container technology, offers me a better solution.

Here are the arguments:

  • Reduce your startup time and be more efficient. You can find many Docker images to set up a ready-to-use development environment (a minimal sketch follows this list):

  • Docker image node.js dev : A Dev environment for JS
  • Docker image Ruby Dev, Docker image Ruby Dev2
  • Docker image C/C++ on Linux
  • Docker image Java Dev
  • Docker image PHP Dev

  • Broadcast your software programming best practices by using the same environment across your team. As a tech lead, my mission is to make my colleagues better than me. To reach that goal, I try to provide them with the best tools, configuration, IDE and automation to help them in their work. How many times have I had to provide a formatting style guideline to indent their code? A syntax checker configuration? An IDE with the right plugins? All these issues can be solved by providing my Docker image and updating it regularly.

  • The time for Web IDE software has probably come: Eclipse, Visual Studio, Borland Delphi; such IDEs have been used by generations of developers. They all come with the same advantages and drawbacks. Powerful, clever code completion, nice OS integration and notifications, a whole bag of features. Clearly the developer has a great environment to write software, but these solutions do not scale well inside a team. How to share my configuration? My preferences? How to share code? How to communicate? To create consistency in your team, you will have to rely on a great IT administrator: a magician of the command line and PowerShell, able to set up your OS with the same configuration everywhere, yet able to update it regularly.
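
As a minimal sketch of the ready-to-use environment idea from the list above, assuming a Node.js project sitting in the current directory:

# Start a disposable Node.js dev environment; the project is mounted, nothing is installed on the host
docker run --rm -it -v "$(pwd)":/app -w /app node:8 bash

# Inside the container: install the dependencies and run the tests
npm install && npm test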

My recommendation is to rely on two kinds of tools to produce your software: lightweight code editors such as Atom or Visual Studio Code from Microsoft, and Web IDEs such as Cloud9 or Codenvy. Codenvy, using Eclipse Che, the web rewrite of Eclipse, is a great example: a SaaS IDE with a comparable wealth of features and configuration. The most amazing thing is that this great and complex system can be installed with a Docker one-liner.

 Day two : Security everywhere, freedom and performance nowhere

As software specialists, we are well aware of the threats coming from web applications, unmonitored operating systems and data breaches. Our daily duty is to protect our customers' data.

The consequence for developers is, as a matter of rule, that our computers are locked down. Software installation is double-checked by IT, the Internet is accessed through a proxy, antivirus, whitelists and so on; hard disks are encrypted, and so on. All development has to be done in virtual machines. BUT virtual machines are such a pain to manage: huge disk space, hard to customize, fairly expensive solutions (the licence cost of VMware to be able to perform VM snapshots on every developer laptop…).

Docker Datacenter

Building Virtual Disk images is a tremendous task for the system administrators : slow to copy, hard to customize, mostly manual installation and snapshots to produce them. Developers does not find the necessary flexibility to adapt to their customer projects.

Docker is offering a neat and efficient way to produce images, thanks to the docker script language. A good mix between automation and the traditional system administrator work of producing shell scripts.

Using Docker images gives system administrators enough security control with less maintenance effort, and developers can easily submit their own images to the security/administration team for review. But where do you store your Docker images? At the present time, I would recommend something like Docker Datacenter to host your company's containers on premise.
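
Publishing an image to such an on-premise registry then boils down to something like this (registry.mycompany.local is a hypothetical hostname, used only for illustration):

docker tag mycompany/java-dev:1.0 registry.mycompany.local/java-dev:1.0
docker push registry.mycompany.local/java-dev:1.0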

Day twenty : The typical legacy IT Project

Traditional "legacy" IT Projects features : * a code base * scripts to build the software * some manual test cases * a huge and extensive installation documentation to setup : * the test environment * the production environment * perform the maintenance, the upgrade, the backup of the system * scripts to install the database schema

In practice, most IT projects force developers to manually install their development, test and production environments using incomplete, out-of-date documentation. How does the software team perform a QA session? Will they create a brand-new test environment with fresh data in a known state and the latest software version? The answer is probably no, definitely no.

Usually, IT teams rely on a single test server on a virtual machine, painstakingly built up over the scrum sprints. Do you think I am exaggerating? Ask your team: how much time would they need to recreate this environment? And what if their snapshot is lost or damaged?

Continuous Dockery, ElectricCloud image property

Docker provides several solutions to common IT project problems:

  • Deploying the customer application on the developer laptop: Docker images are shared between developers to give everyone access to a debug environment. Docker Compose can help the software development team build ready-to-use environments.

  • Initializing and populating a database for tests: another way to run integration tests is to rely on Docker to build a ready-to-use image of your data. Start the container, wait for it to be ready, execute your integration tests and kill the container afterwards (see the sketch after this list). Such a scenario is easy to create with Docker, even with proprietary databases such as Oracle or MSSQL. This article is a good introduction to Docker and test automation.

Deterministic Test Automation

  • Multiple target and environment compilation: developers often need to test their software in different environments and browsers. Docker images also provide a solution to this complexity.
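
As a sketch of the database scenario mentioned above, here is the start/wait/test/destroy cycle using the official postgres image (the container name, password and test command are examples):

# Disposable database for integration tests.
docker run -d --name testdb -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres:9.6
until docker exec testdb pg_isready -U postgres; do sleep 1; done   # wait for readiness
npm test                                                            # run your integration tests
docker rm -f testdb                                                 # throw the container away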

Day forty : The void of the production environment

Recently, I met a brilliant developer, working alone, maintaining a messy piece of PHP code. He was not the originator of the project, but he had been in charge of it for two years. He told me that at the beginning of the project the manager gave him a virtual machine with everything on it, to help him. He is still working on that same machine today.

Currently, he is struggling with the customer and that software. He and the customer have different deployment environments, and the differences in server, language and framework versions are creating a huge mess.

Another project, another situation. This IT team has been relying on Ansible (with Puppet it would have been the same story) to deploy their software to the different environments. Despite the improvements brought by Ansible's automation, there is always a slight tension when launching the Ansible scripts. Maybe it is system entropy, or virtual machine erosion; most likely the reason is that the virtual machines have never been deleted and recreated. Anyway, there are subtle differences between the environments, and the Ansible deployments sometimes fail when new features are shipped.

System erosion

With that team, we reached a common point of view: when should we use Ansible to deploy the software? We should rather use Ansible to prepare the virtual machines to host Docker, open the firewall, establish the network routes, configure the monitoring and so on. The software itself is shipped as a Docker image, copied by Ansible and launched.
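
On the target virtual machine, the deployment step then amounts to something like this (the image name, tag and port are hypothetical; Ansible only copies the archive and runs these commands):

# Sketch of the deployment step on a host prepared by Ansible.
docker load -i /tmp/myapp-1.4.2.tar        # image archive copied over by Ansible
docker rm -f myapp 2>/dev/null || true     # stop the previous version, if any
docker run -d --name myapp --restart=always -p 8080:8080 myapp:1.4.2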

Docker/containers can simplify your software deployments whether you have a private cloud or regular virtual machines. Simply install Docker on your virtual machines and change the way you ship your software; once the initial effort is behind you, you won't regret it.

Day eighty : The good old mama’s Software Factory

The last situation where Docker/containers really shine is inside your software factory.

Docker can be used to make your software factory evolve from a monolithic, all-purpose but slow and frustrating setup into a real platform: Software Factory As A Service (if you like the term SFAAS, it's mine 🙂).

The main differences between a Software Factory and a SFAAS are the following:

  • Product owners and team managers create the new software factories for their projects directly through a Web UI, picking the technologies and tools they need.
  • Developers can instantiate new environments to build or test the software without any interaction, paper submission or waiting for a round trip between Earth and Mars.
  • Integration engineers provide new tools and environments that the projects can adopt if they wish.
  • Few interactions are needed between the infrastructure and system administration teams and the software teams. It's a win-win solution and the IT bottleneck is removed.

Docker Software Factory : Marcel Birkner

I strongly recommend building software factories on top of containers, like these great initiative projects do.

Conclusion

If you have read the whole article, I can only say a big thank you, and I hope you have learned a thing or two. The arrival of containers is really helping developers and ops, and I wish IT companies would fully embrace these technologies to make our profession much more fun and attractive.

0

Hexo plugin : hexo-generator-slideshare

Hi, I have developed a rather small plugin for the great static site generator Hexo.

If you don't know it yet, Hexo is a static site generator written in JavaScript that lets you build fast blogs without the burden of running a full CMS like WordPress.

This website is powered by Hexo. Since I sometimes write Slideshare presentations, I decided to build a small plugin to embed them.

The plugin source code is available in the Github repo.

To install the plugin, simply run the following command inside your Hexo blog folder:

npm install hexo-generator-slideshare --save

Your site is then ready to embed Slideshare presentations by adding the following tag:

{% slideshare slideshareID %}

More information is available on the npm page and in the README.

1

The disappointing quest for a Headless CMS in 2017

In 2017, this blog is powered by Hexo.js. However, I am looking for a replacement since Hexo.js is lacking crucial features.

Introduction

TL;DR: Hexo.js is too limited, I want online post editing!

I have recently been working on replacing the technology powering my blog. A major point is that I am disappointed with its theme. I would like to replace it with a new technology, Vue.js, which I have already discussed there.

Since I am replacing the whole front-end, I have been using the great plugin hexo-generator-json. However, I still have major issues with my assets (stored alongside the posts), and it is not really compatible with a CDN solution.

The second feature I am missing is the ability to edit my posts online. I am a Medium user and I love its mobile application for creating and editing posts, as well as watching statistics. Something I did not think about at first is that it is impossible to create new posts with Hexo.js without a computer. Indeed, to publish, you have to regenerate the site using a full Node.js environment, commit and push your modifications to GitHub, deploy the Docker container and so on (roughly the workflow sketched below). These are tasks I have mostly automated, but I still don't have a CI environment available for it.
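
For reference, the manual publication workflow looks roughly like this (the Docker deployment part is specific to my setup, and the image and branch names are examples):

hexo clean && hexo generate                                     # rebuild the static site
git add . && git commit -m "New post" && git push origin master
docker build -t my-blog . && docker run -d -p 80:80 my-blog     # deploy the container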

I did not want to switch back to Drupal or WordPress, which I consider bloated solutions, slow and hard to tune. I wanted a compromise: why not a NoSQL database, a light REST backend, an admin UI, and that's all? At the beginning of this blog, my plan was to build this backend myself, but I quickly decided to concentrate on the content rather than on the code.

Fortunately, the technologies have evolved, so I made a list of Headless CMS / API-first CMS solutions and tested them.

Headless CMS, what is it ?

Headless CMS

I won't spend too much time on the details; a good description has already been written there.

Basically, legacy / traditional CMS are highly coupled solutions where the following components are tied together:

  • Database : SQL Databases
  • Backend : PHP or worse
  • Front-End : Templated front end or theme highly coupled with the backend API. Unmodifiable at best, throwable at worst.
  • Separated WS / RPC : External service to access the backend data, not used by the front-end.
  • Admin UI : Bundled Admin UI.

Usually this kind of CMS is shipped as one big block called WordPress, Drupal, Joomla and so on.

The good news is that even these famous solutions are evolving to apply the following modern and well-known principles :

  1. Decoupled front-end : the CMS front-end should be decoupled. The UI accesses the blog data and content through a REST API (see the curl sketch after this list). UIs for headless CMS usually use technologies such as Angular, React or Vue.js.
  2. Responsive front-end : a headless CMS enables you to create different UIs depending on the device: smart watch, website, search engine, etc.
  3. NoSQL database : handling documents and content is the speciality of NoSQL databases, allowing you to add your own custom fields, categories and organization.
  4. Framework : such a headless CMS should provide libraries or frameworks (NPM modules and so on) to access the content and handle security.
  5. DevOps : the solution should be dockerized.
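
In practice, "decoupled" simply means the front-end fetches content over HTTP. A minimal sketch, assuming a hypothetical endpoint and token:

# Hypothetical REST endpoint of a headless CMS; the front-end consumes plain JSON.
curl -H "Authorization: Bearer $API_TOKEN" "https://cms.example.com/api/posts?limit=10"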

 My expectations

I expect a Headless CMS to contain:

  • a REST backend
  • documented RESTful APIs
  • a NoSQL-compatible database driver
  • a bundled Admin UI that talks to the REST backend through its API
  • a Docker image or docker-compose file
  • the possibility to add custom fields
  • the possibility to edit content in Markdown
  • cloud file storage for my media
  • a lean solution : I don't want another WordPress installation
  • a Node.js solution : I want something lightweight
  • a self-hosted solution : I want to deploy it on Google Cloud.

 Results

Here is the list of experiments and my opinion on each.

Directus : No!

The docker-compose setup was not working (I used this project); the plain docker instructions did work for me.

I launched it and soon enough received a lot of technical alerts, spoiling the pleasure of a fresh installation.

Directus / Error message

My last blocking point, and the reason I rejected it: I did not find any way to create a content category (called a table) in the admin UI. It seems you have to manipulate the SQL database directly to create them: no thanks (rant here).

 GetMesh : Meh

Uh Uh, a Java solution to power a small blog : no thanks.

GetMesh

 Drupal and WordPress : Hydra CMS

Too big, too well-known. The REST API is for sure the next security hole of these solutions.

But the real reason for my rejection: the UI cannot be separated from the backend!! And why would I want a UI embedded in my backend when I want to create an SPA website?

I will use them once they have removed their UI from the installer.

I suggest calling them HydraCMS.

GraphCMS : Hipster$$CMS

GraphCMS

Looks great, but I want my own self-hosted solution and I don't want to pay for it.

Site here

Ghost : GirlyCMS

Honestly, I had a crush on Ghost. Sexy, a great installer, great documentation, everything to tempt me like an attractive woman.

The problem is that Ghost has almost everything to charm me, but it comes with an embedded UI!!!

I don't want a UI, I want to build my own 🙁

Apart from that point, GhostCMS is really great.

Ghost CMS

It even has a Slack integration and loves Markdown!!

Ghost CMS Site

Cockpit : Blind CMS

Cockpit CMS

Listed in the Awesome CMS List, Cockpit CMS is a rather small solution.

The good points are :

  • Docker is working fine.
  • The concepts and architecture are OK.
  • Nice Admin UI, I really appreciated the way to create my collections.

But what really disappointed me was :

  • No documentation (REST and so on); for a developer it's unusable.
  • PHP: the REST API is coded in PHP… Meh.
  • Lonesome developer: yes, he is brave and we should encourage him, but he is freaking alone.

In summary, I think this project goes in the right direction but has taken a tough and spiky path. PHP is clearly not the appropriate language for such a solution; compared to an Express server, the amount of work to deliver is too high. It really needs more (active) contributors to create a good solution and fill the big documentation black hole. I cannot help, since I don't want to code in PHP again, but the solution could be great.

Site is here

 KeyStoneJS

At first glance I rejected it: I could not find any Docker image, or the few I found were not working. But my first attempt was misguided. KeystoneJS is not a headless CMS by itself; it is rather an implementation of a CMS, fully customizable to create your own blog!

Powered by Express and Node.JS, two technologies I am particularly fond of!

The site is there

The positive points of KeystoneJS:

  • A slick project creator using Yeoman (see the sketch after this list)!
  • Modern technologies, in my opinion the best to create a CMS
  • The bundle contains what I expect (Admin UI, REST backend, NoSQL database (MongoDB))
  • Fully customizable collections and so on
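
The project creator works roughly like this, if my memory of the Yeoman generator is right (versions and prompts may differ):

# Sketch: scaffolding a KeystoneJS site with the Yeoman generator.
npm install -g yo generator-keystone
mkdir my-blog && cd my-blog
yo keystone          # answer the prompts (project name, MongoDB URI, admin user, ...)
node keystone.js     # start the generated site locally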

The negative points are :

  • Maybe too much code to begin with
  • What is the maturity of the base implementation?
  • How much effort is required to build your own website?
  • I have not yet found an NPM module to build a REST client

Conclusion

I have rejected most of these solutions.

  • I tried twice to install Directus and migrate my data, but I gave up. I don't believe in its concepts.
  • The lack of API documentation in Cockpit (HTML or à la Swagger) blocks any attempt to use it and migrate my data. The fact that the solution is developed in PHP dampens my wish to support it, and I don't much like PHP REST backends, to be honest.
  • I really love Ghost, but I don't want their UI, I want mine. Otherwise I would have used it.
  • I tried Drupal and WordPress, but the required system resources plus the fact that I cannot disable the UI are a big NO for me.

The consequence is that I am going with KeystoneJS, and I hope it won't take too much work to power a new version of my blog.

Stay tuned!
