Wednesday, September 5, 2018

Level up logs and ELK - Introduction

Articles index:

    1. Introduction (Everyone)
    2. JSON as logs format (Everyone)
    3. Logging best practices with Logback (Targeting Java DEVs)
    4. Logging cutting-edge practices (Targeting Java DEVs)
    5. Contract first log generator (Targeting Java DEVs)
    6. ElasticSearch VRR Estimation Strategy (Targeting OPS)
    7. VRR Java + Logback configuration (Targeting OPS)
    8. VRR FileBeat configuration (Targeting OPS)
    9. VRR Logstash configuration and Index templates (Targeting OPS)
    10. VRR Curator configuration (Targeting OPS)
    11. Logstash Grok, JSON Filter and JSON Input performance comparison (Targeting OPS)

       

      Introduction

       

      Why this? Why now?

      This is the result of many years as a developer, knowing that there was something called "logs":
      A log is something super important that you cannot change, because someone reads it; something you cannot read either, because you don't have SSH access to the boxes it is generated on. Write logs, but not too much, or the disk may fill up.
      Then I learned how to write them, then suffered reading them (grep), until I discovered there was a super-expensive tool that could collect, sort, query and present them for you.
      Then I loved them. I always thought of them as the ultimate audit tool, but still too many colleagues preferred to use a database for that sort of functionality.

      I got better at logging, treating it like a nice story happening in my application, and I also learned how to correlate logs across multiple services, but I barely managed to create good dashboards. It happened that Splunk was too expensive to buy, and ElasticSearch too expensive to maintain. Infamous years without managing logs happened again.

      Finally I got a job that involved architecture-level monitoring decisions, and I got the opportunity to develop a logging strategy for ElasticSearch (Splunk and other managed platforms didn't require that much hard thinking, as they provide the know-how and the setup). The strategy I will develop in the next few articles came as a solution to restrictions common to all the companies I've been around in the last decade.

      It is a long story; I will try to make it concise. Bear with me and, if you belong to that huge 95% of companies that use ElasticSearch as a super-grep, you'll raise your game.

      Objectives of this series of articles:

      1. Save up to 90% of disk space, based on VRR (Variable Replication factor and Retention) estimations, by playing with replication, retention and custom classification.
          • Differentiate important from redundant information and apply different policies to each.
      2. Log useful information, for once: information that you will be able to filter, query, plot and alert on.
          • We will cover parameters, structured arguments, and how to avoid needing grok to parse them.
      3. Save tons of OPS time by using the right tools to empower DEVs to be responsible for their logs.
          • Let's avoid bothering our heroes with every change in a log line. Minimizing OPS time is paramount.
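Objective 2 in a nutshell, as a hypothetical sketch (names and format are mine, not the configuration developed later in the series): once a log event is emitted as one JSON object per line, every field is queryable and there is nothing left for grok to parse.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: one JSON object per log event instead of free text.
// A real setup would delegate this to the logging framework's JSON encoder.
class JsonLogSketch {

    // Builds a single-line JSON event from key/value pairs.
    // Naive on purpose: no escaping, every value rendered as a string.
    static String event(String level, String message, Map<String, Object> fields) {
        StringBuilder sb = new StringBuilder();
        sb.append("{\"level\":\"").append(level).append("\"");
        sb.append(",\"message\":\"").append(message).append("\"");
        for (Map.Entry<String, Object> e : fields.entrySet()) {
            sb.append(",\"").append(e.getKey()).append("\":\"").append(e.getValue()).append("\"");
        }
        return sb.append("}").toString();
    }

    public static void main(String[] args) {
        Map<String, Object> fields = new LinkedHashMap<>();
        fields.put("orderId", 42);
        fields.put("durationMs", 118);
        // Every field below can be filtered, plotted and alerted on directly.
        System.out.println(event("INFO", "order processed", fields));
    }
}
```

Contrast that with "Order 42 processed in 118 ms", which forces OPS to maintain a grok pattern for every message shape a DEV invents.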

      Some assumptions:

      • All my examples will orbit around Java applications using the SLF4J logging API, backed by Logback.
      • Logs are dumped to files, read by FileBeat and sent to Logstash.
      • Logstash receives the log lines and sends them to ElasticSearch after some processing.
      • Kibana as the ElasticSearch UI.

      Even if your stack is not 100% identical, I am sure you can apply some bits from here.


      Next:  2 - JSON as logs format


      Sunday, January 31, 2016

      Vagrant + libvirt plugin in OpenSuSE Leap 42.1

      First bit, install from https://software.opensuse.org/package/vagrant

      With the first command I needed (vagrant reload), I got this error:

      WARN: Unresolved specs during Gem::Specification.reset:
            childprocess (>= 0)
            ffi (>= 0.5.0)
            net-ssh (>= 2.6.5)
            rest-client (>= 0)


      sudo gem install ffi -> mkmf.rb can't find header files for ruby at /usr/lib64/ruby/include/ruby.h

      Fuck my life...

      Install ruby2.1-devel using YaST

      sudo gem install ffi -> SUCCESS.
      sudo gem install childprocess -> SUCCESS
      sudo gem install rest-client -> Fuck my life again

      nonsense
      nonsense
      compiling unf.cc
      make: g++: Command not found
      Makefile:204: recipe for target 'unf.o' failed
      make: *** [unf.o] Error 127

      make failed, exit code 2
      nonsense
      nonnsense

      Install gcc-c++ using YaST

      sudo gem install rest-client -> SUCCESS
      sudo gem install net-ssh -> SUCCESS!!

      Let's try again with "vagrant reload"

      nonsense
      nonsense
      Vagrant experienced a version conflict with some installed plugins!
      This usually happens if you recently upgraded Vagrant. As part of the
      upgrade process, some existing plugins are no longer compatible with
      this version of Vagrant. The recommended way to fix this is to remove
      your existing plugins and reinstall them one-by-one. To remove all
      plugins:

          rm -r ~/.vagrant.d/plugins.json ~/.vagrant.d/gems
      nonsense
      nonsense

      This might be related to an old installation I had a few minutes ago. If it happens to you:
      rm -r ~/.vagrant.d/plugins.json ~/.vagrant.d/gems

      And try again with "vagrant reload"

      nonsense
      nonsense
      The provider 'libvirt' could not be found, but was requested to
      back the machine 'all_apps'. Please use a provider that exists.


      This is actually good: you passed stage 1. Let's try stage 2, installing the plugin:

      sudo vagrant plugin install libvirt -> fuck my life, and do it hard.

      The plugin(s) can't be installed due to the version conflicts below.
      This means that the plugins depend on a library version that conflicts
      with other plugins or Vagrant itself, creating an impossible situation
      where Vagrant wouldn't be able to load the plugins.

      You can fix the issue by either removing a conflicting plugin or
      by contacting a plugin author to see if they can address the conflict.

      Vagrant could not find compatible versions for gem "ffi":
        In Gemfile:
          vagrant (= 1.7.4) ruby depends on
            listen (>= 0) ruby depends on
              rb-inotify (>= 0.9) ruby depends on
                ffi (>= 0.5.0) ruby

          vagrant (= 1.7.4) ruby depends on
            childprocess (>= 0) ruby depends on
              ffi (>= 1.0.11, ~> 1.0) ruby

          libvirt (>= 0) ruby depends on
            ffi (~> 0.6.3) ruby


      Apparently we require ffi >= 0.5.0, ffi ~> 1.0 (>= 1.0.11) and ffi ~> 0.6.3 at the same time. Obviously it's complicated to fulfil all those requirements at once. IMHO, the libvirt gem's requirement is the guilty one.

      From here, I had to read a bit more about the same problem:
      https://github.com/mitchellh/vagrant/issues/3897

      So we try:
      vagrant plugin install vagrant-libvirt

      Red is getting redder


      Bundler, the underlying system Vagrant uses to install plugins,
      reported an error. The error is shown below. These errors are usually                                             
      caused by misconfigured plugin installations or transient network                                                 
      issues. The error from Bundler is:                                                                                

      An error occurred while installing nokogiri (1.6.7.2), and Bundler cannot continue.                               
      Make sure that `gem install nokogiri -v '1.6.7.2'` succeeds before bundling.                                      

      Warning: this Gemfile contains multiple primary sources. Using `source` more than once without a block is a security risk, and may result in installing unexpected gems. To resolve this warning, use a block to indicate which gems should come from the secondary source. To upgrade this warning to an error, run `bundle config disable_multisource true`.Gem::Ext::BuildError: ERROR: Failed to build gem native extension.                                           

          /usr/bin/ruby.ruby2.1 extconf.rb                                                                              
      checking if the C compiler accepts ... yes                                                                        
      Building nokogiri using packaged libraries.                                                                       
      Using mini_portile version 2.0.0                                                                                  
      Static linking is disabled.                                                                                       
      checking for gzdopen() in -lz... no                                                                               
      zlib is missing; necessary for building libxml2                                                                   
      *** extconf.rb failed ***                                                                                         
      Could not create Makefile due to some reason, probably lack of necessary                                          
      libraries and/or headers.  Check the mkmf.log file for more details.  You may                                     
      need configuration options.                                                                                       

      Provided configuration options:                                                                                   
              --curdir
              --ruby=/usr/bin/ruby.ruby2.1
              --help
              --clean
              --use-system-libraries
              --enable-static
              --disable-static
              --with-zlib-dir
              --without-zlib-dir
              --with-zlib-include
              --without-zlib-include=${zlib-dir}/include
              --with-zlib-lib
              --without-zlib-lib=${zlib-dir}/lib64
              --enable-cross-build
              --disable-cross-build

      extconf failed, exit code 1

      Gem files will remain installed in /root/.vagrant.d/gems/gems/nokogiri-1.6.7.2 for inspection.
      Results logged to /root/.vagrant.d/gems/extensions/x86_64-linux/2.1.0/nokogiri-1.6.7.2/gem_make.out


      Within all this nonsense you can see "zlib is missing; necessary for building libxml2".
      Use YaST once again to install zlib-devel.

      Now again...
      sudo gem install nokogiri -v '1.6.7.2' -> SUCCESS

      Finally...
      sudo vagrant plugin install vagrant-libvirt -> RED AGAIN

      Nonsense
      Make sure that `gem install ruby-libvirt -v '0.6.0'` succeeds before bundling.
      Nonsense
      extconf.rb:73:in `<main>': libvirt library not found in default locations (RuntimeError)
      Nonsense


      Maybe... it'd be a good idea to install libvirt and libvirt-devel in my system using YaST again...

      vagrant plugin install vagrant-libvirt -> Installed the plugin 'vagrant-libvirt (0.0.32)'! (Fuck YEAH!)

      Summary:
      Install with zypper, YaST or software.opensuse.org: vagrant, ruby2.1-devel, gcc-c++, zlib-devel and all their dependencies.
      Install the following gems with "sudo gem install": childprocess, ffi, net-ssh and rest-client.
      Install the right plugin, without sudo: "vagrant plugin install vagrant-libvirt".

      Tuesday, December 8, 2015

      Low cost software organization for open source projects: Allocation and provision.

      This is quite a difficult topic to explain, as the problem is solved differently if you are in a medium-sized company with a decent IT department than if you're sitting alone on your sofa and all your infrastructure is a microserver in the corridor. And I'm writing this from my sofa, for tiny companies and personal projects.

      To be honest, there's little point in analysing Allocation (of VMs) and Provision (of software) separately, as we will find solutions that bring both aspects integrated, or at least highly oriented to one another.

      Allocation 

      Requirements for allocation:

      • It must deal with both cloud and on-premise infrastructure.
      • Command-line interface available.
      • Low effort in installation and maintenance.
      • Low overhead; a complex solution would require several boxes just to run the orchestration.
      Non-requirements, worth highlighting explicitly:
      • Autoscaling is not required.

      Candidates:

      • AWS ECS (Elastic Container Service) or EB (Elastic Beanstalk)
        • Obviously this doesn't fulfil the main requirement: if we used this solution, we wouldn't be able to reuse it on-premise.
      • Docker
        • Docker has many facets and, among them, a remote API call might simulate allocation of resources. Not sure if the same interface works from AWS.
      • Vagrant
        • The perfect tool (or at least the most suitable I know) for the job, as it offers allocation with more than a handful of backends, among them AWS, libvirt and Docker itself. It also offers a plugin system for hooking in different provisioners.
      • Manual
        • It's an option, isn't it? You could allocate resources manually from both cloud and on-premise hardware.
      • Terraform
        • I haven't used this tool, but I think you should if you're facing the same problem that I am. It supports several providers, but I haven't really found out whether libvirt is among them.

       

      Provision

      Requirements for provision:

      • It must protect sensitive information; secrets can't be exposed in an open-source repository.
      • Easy to manage / change / expand.
      • Linux is the only platform required.
      • Not incompatible with the allocation system.

      Candidates:

      • Chef
        • I'm clearly biased towards this provisioner, as it offers a free in-cloud service for a small number of users (not only chef-solo), it's easily integrated with Vagrant, and you end up working in a programming language.
        • Secrets are managed in data bags and encrypted data bags, stored in your computer (chef-solo) or in the cloud (chef-server).
        • Ruby based.
      • Ansible
        •  I have less experience with Ansible than with Chef, bear that in mind.
        • Serverless, like chef-solo
        • YAML-oriented; it can be extended with Python.
        • Secrets are encrypted and stored with the rest of the configuration data, somewhere in your computer.
      • Puppet
        • Even less experience with puppet, but for some reason, I don't know anybody that uses it.
        • I can only recommend some reading if you are unfamiliar with Chef, Ansible and Puppet, just in case Puppet turns out to be better, but I cannot recommend the unknown.
      • Docker
        • What's Docker doing here? Well, it might perfectly be considered a provisioning engine, as long as you're actually automating a manual installation of software in a virtual environment.
        • Secrets, however, need to be managed externally; otherwise, be sure you're pushing to a private repository.
      • Docker Compose
        • An even more beautiful way of relating the software in our boxes, especially if, like me, you think that consolidating different microservices in one box is going to save you money.
        • Again, secrets need to be managed externally.
      • Manual
        • No.

       

      Best options:

      The one chosen by me (examples coming soon):

      • Vagrant + Chef (cloud) + Docker
        • Vagrant + libvirt for my microserver (allocation < 30 seconds)
        • Same Vagrant + aws for the cloud (not yet explored)
        • Chef installs docker, pulls the right image, and configures it to start the containers on start.
        • Docker contains the software with some placeholders for environment variables containing passwords and sensitive data (therefore docker images are public).

      Adopt (other options I'd use):

      • Vagrant + Chef or Puppet
        • WebUI management of nodes, configuration and secrets
        • Push and pull modes available
      • Vagrant + Ansible
        • Less help, more manual

      Assess (Investigate before adopting):

      • Terraform +  Chef or Puppet or Ansible
      •  Docker as platform + Chef or Puppet or Ansible

      Hold (Don't bother):

      • Anything + Manual
        • You'll feel tired soon.

      Not considered for being too big or complex for small projects:

      • Cloudfoundry
      • Kubernetes
      • Mesos

      Sunday, December 6, 2015

      Low cost software organization for open source projects: Source Code Version Control System

      There's no way you can think of developing any product without using a Version Control System at all; too many benefits at a really low cost.

      Requirements for our VCS would be as follows:
      • High availability.
      • Backups.
      • Public repositories for our open-source projects.
      • Private repositories for (maybe) some private configuration data.
      • Accessible.
      • Git: it's the industry standard right now and it'll cover your software requirements 99.99% of the time.
      And the possible options come in two different buckets:
      • Self-hosted:
        • Seriously, do you think a tiny company can support the complexity (and maybe the cost, if you use the cloud) of backing up a self-hosted repository and keeping it accessible from anywhere?
      • As a service:
        • GitHub:
          • Popular, and it ticks almost all the requirements, except for free private repositories. I'm using GitHub for my open-source projects, yes.
        • BitBucket:
          • Ticks all the boxes as well, plus a number of free private repositories, enough for some sensitive information we might want to store. I'm using BitBucket as well, for my Jenkins configuration auto-backup.
      So, as a first step in our new low-cost organization for open-source projects, I'd recommend BitBucket, with the information I have today.

      Friday, December 4, 2015

      Low cost software organization for open source projects

      This is an effort I've been wanting to make for a long time, but it's only now that I think I have something like a solution.

      I'll be creating an entry per topic, covering ideas about how to create not your application (as I assume you know your stuff), but everything around your project that makes it a solution.

      My own game will be my example throughout these stages. Its name is "Cabo Trafalgar" and I've talked about it enough already; there's a link above if you're interested in a 3D sailing simulation made in Java.

      The topics I expect to cover are as follow:
      1. Code repository
      2. Allocation and provision
      3. Continuous integration
      4. Installation of software / Configuration management
      5. Deployment and platforms
      6. Continuous delivery
      7. Logging
      8. Monitoring

      And the criteria we're going to prioritize:
      1. Low price, free when possible.
      2. Secure: configuration values safe and far from the code.
      3. Stateless / easy to recover / easy to reproduce.
      As a result of this process, you should be able to have an enterprise-quality, almost production-ready solution running for really little money for your small project.

      Thursday, August 13, 2015

      Nifty-flow explanation

      My first open-source collaboration!


      When I started "Cabo Trafalgar", it took me three tries to find the technology that made it possible. I started with Irrlicht, and I also took a look at another library I cannot recall anymore. Only when I discovered that the language that paid my bills was really able to render native 3D, and really nice water, did I venture onto JMonkeyEngine with Java.

      Once you start with JME, you don't have many options for creating rich graphical interfaces, and the one that stands out is, needless to say, nifty-gui. Easy to use (I find it easier to make examples using JME3 than the raw nifty-gui libraries, that easy!) with good documentation and examples. But, no doubt because of my webby background, I absolutely missed some features:
      • When you're defining a screen, you're pretty much defining its connections, in my words, the "flow of screens". That greatly reduces the potential reusability of screens in other parts of your game, or in other games whatsoever.
      • You can create screens with static XML, or with dynamic Java, but web developers have long gone past these restrictions, and we have like a dozen template languages to mix code and static text. At the end of the day, XML for screens is nicer and easier to interpret than crude Java code (no offense).
      • Remember: when enumerating, always use 3, 7 or 10 bullet points, force it if necessary :P
      At this point I had few options. I could have dropped my project, but I didn't feel like doing that again. I could have renegotiated my use cases, making it more cumbersome to manage repeated screens in different parts of the game, or... I could actually build what I needed on top of nifty-gui.

      For the first time I was actually solving my own problems instead of waiting for somebody else to do it for me, and it didn't take me long to realize that it was actually possible.

      The use case:

      For my game, I needed the user to walk through one or two screens, one after the other, until finding the "hall": a screen that offers every mode the player can play. In my case, I have a "CounterClock" game and a "Windtunnel" game, both accessible from the same "Menu" screen.

      Once you choose "CounterClock", the user must "Select profile", then "Select ship", then "Select map" and finally "Select controls", then play.
      Once you choose "Windtunnel", the user must "Select ship", then "Select controls", then play.

      It's easy to spot that I really wish to reuse some screens, and that the flow of screens will be linear most of the time, in as many examples as I can think of.

      Implementation:

      Just take the basis of Struts: you have a central controller that receives every interaction with the user. This controller has, somewhere, the capacity to decide what the user's options are given the current status and, using the current input, to calculate the next screen to render.

      That exact idea made me build the RedirectorScreenController: all it does is redirect the user to the next screen, as calculated by the ScreenFlowManager. My screens' onNext and onPrevious actions will always direct to this RedirectorScreenController and its associated empty screen.

      Next, as I'm talking about reusability, I need to differentiate between "screen definition" and "screen instantiation", and bear in mind that the relationship will be 1:N.
      My implementation defines a "screen definition" (See ScreenDefinition) as:
      1. a unique name
      2. a way of getting the controller live instance (unique in the system, so far)
      3. a way of getting the screen constructor (a class that actually knows whether the screen is xml or java and executes it)
      My implementation defines a "screen instantiation" (See Screen) as:
      1. the flow it belongs to
      2. a name, unique within the flow
      3. the live instance of the controller associated to this screen.
      4. the live instance of the generator associated to this screen.
      Every screen has a uniqueScreenId, made from the flow name and the screen name.
      In case the flow name is not set, we can safely assume we need to search that screen in our current flow (local search).

      Finally, this theoretical system was actually coming alive; we just need to define flows:
      1. A name for the flow, unique.
      2. A sequence of screen definition names.
      3. An optional screenUniqueId parent.
      We need an initial flow, the flow without a parent, and we can only have one. The ball is rolling!

      If we declare more flows, we need to "hang" them from a concrete existing screen, either from the root flow, or another flow.

      We can always query the ScreenFlowManager (See ScreenFlowManagerImpl for implementation details) for our options, it'll tell us whether we can continue forward, and if other flows are available from this screen.

      Internally, a JGraphT (in-memory graph library) instance has all the information to know where you are, what your options are, and what your next state is given the current state and input.

      User input

      Maybe it's a bit annoying, but there is at least one more element to cover. How do we tell the ScreenFlowManager what our next move is?

      I only managed to solve this problem by "injecting" the same ScreenFlowManager into the controllers (not even automagically) and exposing a "setNextScreenHint" method to tell the flow manager your intention.
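As a sketch of that wiring (a hypothetical controller name, and a minimal stand-in interface instead of the real ScreenFlowManager, which is shown further below):

```java
// Minimal stand-in for the flow manager contract used in this sketch.
interface FlowManager {
    String NEXT = "next";
    String PREV = "prev";

    void setNextScreenHint(String nextScreenHint);
}

// Hypothetical controller: the flow manager is injected by hand (no magic),
// and every user action only declares an intention; the redirector screen
// then asks the flow manager where to actually go.
class MenuScreenController {
    private final FlowManager flowManager;

    MenuScreenController(FlowManager flowManager) {
        this.flowManager = flowManager;
    }

    // Called from the screen's "next" button.
    void onNext() {
        flowManager.setNextScreenHint(FlowManager.NEXT);
    }
}
```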

      Instance resolution

      I gently left this one for the end.
      Do you remember I mentioned "a way of getting the controller live instance" when I was talking about ScreenDefinition? I was ambiguous because this library is quite technology-agnostic, so much so that you have to provide a way of telling me how to get your live instances.

      The easiest example, LiveInstanceResolutor: it's an object you feed with an "instance name" and an "instance"; you give it to me, and every time you mention the instance name, nifty-flow will take the instance from the resolutor. It's a f*****g map.
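In case a picture helps, here is a minimal sketch of that idea (hypothetical names; the real class is LiveInstanceResolutor in the repository):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a live-instance resolutor: literally a named map.
class MapResolutor {
    private final Map<String, Object> instances = new HashMap<>();

    // Register a live instance under a name.
    void add(String name, Object instance) {
        instances.put(name, instance);
    }

    // Return the instance registered under that name, or fail loudly.
    Object resolve(String name) {
        Object instance = instances.get(name);
        if (instance == null) {
            throw new IllegalArgumentException("No instance registered for '" + name + "'");
        }
        return instance;
    }
}
```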

      StaticScreenGeneratorResolutor will take your static XML directly; that work is done already.

      Nifty-flow provides, however, another resolutor, DefaultInstanceResolutor, nothing more than a "resolutor of resolutors", so you can use several at the same time if you want to. Find in the example below how you can assign a prefix to use later in your screen definitions.

      This mechanism originated because I cannot live without Spring, so every time I needed an instance, I was actually invoking a resolutor that dug inside the BeanFactory for the right bean: autowired, resourced, initialised and ready to use. That implementation is not provided, to avoid Spring dependencies, I'm sorry ;)

      How's that looking so far?

      public interface ScreenFlowManager {
          String NEXT = "next";
          String PREV = "prev";
          String POP = "pop";
      
          void addScreenDefinition(ScreenDefinition screenDefinition) throws InstanceResolutionException;
      
          void addFlowDefinition(String flowName, final Optional<String> screenNameFrom, List<String> flowDefinition);
      
          String nextScreen();
      
          void setNextScreenHint(String nextScreenHint);
      
          Collection<String> getChildren();
      }

      I wouldn't complain much: 5 methods, all of them explained above. Easy to use!

      Working example (please find the entire code here)

      public void simpleInitApp() {
          NiftyJmeDisplay niftyDisplay = new NiftyJmeDisplay(
                  assetManager, inputManager, audioRenderer, guiViewPort);
          Nifty nifty = niftyDisplay.getNifty();
          guiViewPort.addProcessor(niftyDisplay);
          flyCam.setDragToRotate(true);
      
          nifty.loadStyleFile("nifty-default-styles.xml");
          nifty.loadControlFile("nifty-default-controls.xml");
      
          DefaultInstanceResolutor defaultInstanceResolutor = new DefaultInstanceResolutor();
          ScreenFlowManager screenFlowManager = new ScreenFlowManagerImpl(nifty, defaultInstanceResolutor);
      
          LiveInstanceResolutor liveInstanceResolutor = new LiveInstanceResolutor();
          defaultInstanceResolutor.addResolutor("static", new StaticScreenGeneratorResolutor(nifty));
          defaultInstanceResolutor.addResolutor("live", liveInstanceResolutor);
      
          RootScreenController screenController = new RootScreenController().setScreenFlowManager(screenFlowManager).setApplication(this);
          ScreenController screenController2 = new Controller2(screenFlowManager);
          ScreenController screenController4 = new Controller4(screenFlowManager);
          liveInstanceResolutor.addController("root", screenController);
          liveInstanceResolutor.addGenerator("root", new RootScreenGenerator(nifty, screenController, screenFlowManager));
          liveInstanceResolutor.addController("controller1", new Controller1(screenFlowManager));
          liveInstanceResolutor.addController("controller2", screenController2);
          liveInstanceResolutor.addController("controller3", new Controller3(screenFlowManager));
          liveInstanceResolutor.addController("controller4", new Controller4(screenFlowManager));
      
          liveInstanceResolutor.addGenerator("generator2", new Generator2(nifty, screenController2));
          liveInstanceResolutor.addGenerator("generator4", new Generator4(nifty, screenController4));
      
          try {
              screenFlowManager.addScreenDefinition(new ScreenDefinition("root", "live:root", "live:root"));
              screenFlowManager.addScreenDefinition(new ScreenDefinition("screen1", "live:controller1", "static:/screen.xml"));
              screenFlowManager.addScreenDefinition(new ScreenDefinition("screen2", "live:controller2", "live:generator2"));
              screenFlowManager.addScreenDefinition(new ScreenDefinition("screen3", "live:controller1", "static:/screen.xml"));
              screenFlowManager.addScreenDefinition(new ScreenDefinition("screen4", "live:controller4", "live:generator4"));
      
              screenFlowManager.addFlowDefinition("root", Optional.<String>absent(), newArrayList("root")); 
              screenFlowManager.addFlowDefinition("screenFlow1", of("root:root"), newArrayList("screen1", "screen2", "screen3", "screen4"));
              screenFlowManager.addFlowDefinition("screenFlow2", of("root:root"), newArrayList("screen1", "screen4"));
      
              nifty.addScreen("redirector", new ScreenBuilder("start", new RedirectorScreenController().setScreenFlowManager(screenFlowManager)).build(nifty));
              nifty.gotoScreen("redirector");
      
          } catch (InstanceResolutionException e) {
              e.printStackTrace();
          }
      
      
      }

      In a few words:
      1. Create and feed your resolutors with the information you'll need from the screen definitions.
      2. Create your screen definitions. I have defined 5 screens, two of them from the same XML and the other three generated from Java code. Here lies the potential of nifty-flow: the static screens would be reusable even across projects, without knowing which screens they link to forwards and backwards.
      3. Create your screen flows. I have a root flow with one screen and then, hanging from that screen, another two flows reusing some screens.
      4. Add the redirector (with that name) as "start" and let the magic happen.

      Final words

      There's some more work to do. I implicitly create NEXT and PREV links between screens, but I'd rather have implemented a mechanism that pushes every movement onto a stack, so that a POP would always return to the previous screen; this would allow us to jump into the middle of another flow and be able to come back (PREV is "statically" linked to the same screen, while POP could come from any invoker).

      Of the features I missed in nifty-gui, I still miss some help templating XML files, instead of making complex screens from Java only; that's something I'd be thrilled to implement, but god knows I don't have the time now.

      Hope you enjoy this library.

      Alber

      Saturday, May 2, 2015

      Some lessons learned: Gradle



      After a year dealing with Gradle in different companies, I think it's high time to start collecting some patterns and suggestions.

      1) Avoid redundant folder declaration.

      Remember, apply plugin: 'java' already sets up default folders; you don't need to declare them unless you plan to change them.

      Recommended
      apply plugin: 'java'

      Redundant
      apply plugin: 'java' 

      sourceSets {    
          main {        
              java {            
                  srcDir 'src/main/java'        
              }        
              resources {            
                  srcDir 'src/main/resources'        
              }    
          }     

          test {        
              java {            
                  srcDir 'src/test/java'        
              }        
              resources {            
                  srcDir 'src/test/resources'        
              }    
          }
      }

      The Groovy and Scala plugins also declare additional folders; read the documentation before you start using them.
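When you genuinely need a non-default layout, override only the folder that differs from the convention. A sketch, assuming a hypothetical legacy project that keeps its sources directly under src:

```gradle
apply plugin: 'java'

// Override only what differs; every other folder keeps the plugin's default.
sourceSets {
    main {
        java {
            srcDir 'src'   // hypothetical legacy layout
        }
    }
}
```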


      2) Choose your syntax for dependency management.

      Gradle allows two syntaxes for dependency declaration:
      • Extended (group: 'ws.lazylogin', name: 'lazylogin-core', version: '1.0.0')
      • Compact ('ws.lazylogin:lazylogin-core:1.0.0')
      Be consistent; your users and colleagues will be pleased.

      I prefer the compact one :)
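For illustration, here is the same dependency declared both ways inside a dependencies block (coordinates reused from the bullets above):

```gradle
dependencies {
    // Extended syntax
    compile group: 'ws.lazylogin', name: 'lazylogin-core', version: '1.0.0'
    // Compact syntax (equivalent)
    compile 'ws.lazylogin:lazylogin-core:1.0.0'
}
```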


      3) Your users don't need to know project details to compile it.

      Any project should be able to clean + build from just the source code and a JVM in place. Ideally nothing else should be required.

      Although some projects will inevitably break this suggestion, try not to.

      Some examples of build-breaking customs:
      1. Using variables declared in ${user.home}/.gradle/gradle.properties that have no default value in build.gradle and don't exist in the project's gradle.properties.
        • This fails for anyone who doesn't own that file or doesn't know which variables to add, and forces them to contaminate their user gradle.properties. No no no no no.
      2. Assuming executables are on the PATH or in some predefined location.
      Try your hardest to make your project easy to build.
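One way to keep the build working for users who haven't defined a property is to fall back to a default in build.gradle. A sketch; `repoUser` is a hypothetical property name:

```gradle
// Use the value from ~/.gradle/gradle.properties or the project's
// gradle.properties if present, otherwise fall back to a default
// so a fresh clone still builds.
def repoUser = project.hasProperty('repoUser') ? project.repoUser : 'anonymous'
```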


      4) Prevent incorrect dependency versioning

      Declare your dependencies and versions in a single place (that was a good idea in Maven's universe too) and reuse each dependency by its variable name.

      The risk of ignoring this advice is that you'll end up using different versions of the same library in different modules of the same project, and you don't want that; neither do I.

      Solution would look like this:

      Dependency declaration (they can be declared in a different file)
      ext{   
          //VERSIONS
          v = [           
              spring: '3.2.0.RELEASE',
              jme3: '3.1.0-snapshot-github'
          ]   

          deps = [           
              //SPRING           
              spring_core: 'org.springframework:spring-core:'+v.spring,
              spring_beans: 'org.springframework:spring-beans:'+v.spring,
              spring_context: 'org.springframework:spring-context:'+v.spring,
              //JME3           
              jme3_core: 'com.jme3:jme3-core:' + v.jme3,           
              jme3_effects: 'com.jme3:jme3-effects:' + v.jme3
          ]
      }

      Dependency usage for any subproject
      dependencies {   
          compile (           
              project(':mod-api'),      
              deps.jme3_core,           
              deps.jme3_effects,           
              deps.spring_beans,           
              deps.spring_core )
      }

      Instead of the more error-prone way:

      Dependency usage for a given subproject A
      dependencies {   
          compile (           
              project(':mod-api'),                        
              'com.jme3:jme3-core:3.1.0-snapshot-github',           
              'com.jme3:jme3-effects:3.1.0-snapshot-github',           
              'org.springframework:spring-beans:3.2.0.RELEASE',           
              'org.springframework:spring-core:3.2.0.RELEASE' )
      }

      Dependency usage for a given subproject B
      dependencies {   
          compile (           
              project(':mod-api'),   
              'com.jme3:jme3-core:3.1.0-snapshot-github',           
              'com.jme3:jme3-effects:3.1.0-snapshot-github',           
              'org.springframework:spring-beans:3.1.0.RELEASE',           
              'org.springframework:spring-core:3.2.0.RELEASE' )
      }


      5) Group your dependencies by dependency configuration.

      It's nicer :)

      Dependencies grouped by configurations
      compile (           
          project(':mod-api'),           
          libs.lazylogin_common_context,           
          libs.nifty,           
          libs.jme3_core,           
          libs.jme3_effects,           
          libs.spring_beans,           
          libs.spring_core,           
          libs.jackson_dataformat_yaml,           
          libs.jackson_databind
      )
      runtime (
          libs.nifty_default_controls,           
          libs.eventbus,           
          libs.auto_value
      )    

      Dependencies ungrouped (and disorganized)
      compile project(':mod-api')          
      compile libs.lazylogin_common_context,           
      compile libs.nifty,           
      compile libs.jme3_core,           
      compile libs.jme3_effects,           
      compile libs.spring_beans,           
      compile libs.spring_core,     
      runtime libs.nifty_default_controls,      
      compile libs.jackson_dataformat_yaml,           
      compile libs.jackson_databind,           
      runtime libs.eventbus,           
      runtime libs.auto_value

      (Grouping is not possible when you need to exclude transitive dependencies from an individual dependency.)
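For completeness, excluding a transitive dependency requires a configuration closure on the individual dependency, which is why such entries can't join a grouped block. A sketch, reusing the `deps` map from the earlier example:

```gradle
// Exclusions need a per-dependency closure, so this line must stand alone.
compile(deps.spring_context) {
    exclude group: 'commons-logging', module: 'commons-logging'
}
```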