Some tips for new vCO plug-in developers (I)

Here is a small list of tips for those who are getting started with vCO plug-in development. Some of the tips are simply advice that will help you keep things clear. Others may make you think about possible future problems and how to face them. And finally, some tips will be useful "just" to avoid the simple (easy-to-fix) problems that are usually reported by QA engineers before the plug-in reaches the final users.

I've grouped them by type and split what would have been one long post into two parts. This is the first one.

Project structure

  • Follow the de facto standard project structure and base it on a Maven project with modules:

/myAwesomePlugin-plugin: root of the plug-in project

/o11nplugin-myAwesomePlugin: module that composes the final plug-in DAR file.

/o11nplugin-myAwesomePlugin-config: module that contains the plug-in configuration web-app. It generates a standard WAR file. (optional)

/o11nplugin-myAwesomePlugin-core: module that contains all the classes that implement any of the standard vCO plug-in interfaces and other auxiliary classes used by them. It generates a standard JAR file.

/o11nplugin-myAwesomePlugin-model: module that contains all the classes that will help us to integrate the 3rd party technology with vCO through the plug-in. They can be developed with some plug-in ideas in mind but they shouldn't contain any direct reference to the standard vCO plug-in APIs. It generates a standard JAR file too.

/o11nplugin-myAwesomePlugin-package: module that imports an external vCO package file with Actions and Workflows to include it inside the final plug-in DAR file. (optional)

Project internals

  • Cache objects if possible

Our plug-in will most probably interact with a remote service, and most probably this interaction will be exposed through "local objects" that represent "remote objects" on the service side. For the sake of plug-in performance and vCO UI responsiveness, we can consider caching those local objects instead of fetching them from the remote service every time. That's a good practice when it's implemented correctly. Here we should start thinking about the scope of our cache: one cache for all the plug-in clients, one per user of the plug-in, or one per user of the 3rd-party service. Once implemented, our caching mechanism will be integrated with the plug-in interface for finding (and invalidating!) objects.
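As a minimal sketch of the idea (VmCache, RemoteServiceClient and VirtualMachine are hypothetical names, not part of the vCO SDK), a per-connection cache could look like this:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical per-connection cache: one instance per configured 3rd-party
 * connection, shared by all vCO clients that use that connection.
 */
public class VmCache {

    private final Map<String, VirtualMachine> vmsById = new ConcurrentHashMap<>();
    private final RemoteServiceClient client; // hypothetical client from the model module

    public VmCache(RemoteServiceClient client) {
        this.client = client;
    }

    /** Returns the cached object, fetching it from the remote service only on a miss. */
    public VirtualMachine findVm(String id) {
        return vmsById.computeIfAbsent(id, client::getVirtualMachine);
    }

    /** Invalidation hook, to be wired into the plug-in's invalidate/update paths. */
    public void invalidate(String id) {
        vmsById.remove(id);
    }
}
```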

  • Get objects in the background

If we have to show large lists of objects in our plug-in inventory and we don't have a fast way to retrieve those objects, one solution could be to get the objects in the background. That can be implemented, for example, by having objects with two states: "fake" and "loaded". Let's assume that fake objects are very cheap to create and provide only the minimal information we have to show in the inventory (e.g. name and id). Then it would be possible to always return fake objects and, when all the information (the real object) is really needed, let the user (or the plug-in automatically) invoke a "load" method to get the real object. The loading process could even be started automatically right after the fake objects are returned, in order to anticipate the user's actions.
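One possible shape for this two-state approach, as a sketch only (Datastore, DatastoreDetails and RemoteServiceClient are hypothetical names):

```java
import java.util.concurrent.CompletableFuture;

/** Hypothetical inventory object that starts as a cheap "fake" and loads the rest lazily. */
public class Datastore {

    private final String id;
    private final String name;
    private final RemoteServiceClient client;   // hypothetical client from the model module
    private volatile DatastoreDetails details;  // null while the object is still "fake"

    public Datastore(String id, String name, RemoteServiceClient client) {
        this.id = id;
        this.name = name;
        this.client = client;
    }

    public String getId()   { return id; }
    public String getName() { return name; }

    public boolean isLoaded() {
        return details != null;
    }

    /** Loads the full ("real") object on demand; returns immediately if already loaded. */
    public synchronized DatastoreDetails load() {
        if (details == null) {
            details = client.getDatastoreDetails(id);
        }
        return details;
    }

    /** Optionally start loading in the background right after the fake object is returned. */
    public void loadInBackground() {
        CompletableFuture.runAsync(this::load);
    }
}
```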

  • Clone objects to avoid concurrency issues

If we use a cache in the plug-in, we have to clone objects. If we have a cache and we always return the same instance of an object to everyone who requests it, there will probably be undesirable side effects. For example, user A requests object O and sees it in the inventory with all its attributes. At the same time, user B requests object O as well and runs a workflow that starts changing the attributes of the object (at the end, the workflow invokes the object's "update" method to update it on the server side). If they get the same instance of object O, user A will see in the inventory all the changes that user B is making… even before they are committed on the server side! If the run goes fine that may not be a big problem (or maybe it is), but if the run fails, the attributes of object O as seen by user A won't be reverted. Solution: the cache (the find operations of the plug-in) should return a clone of the object instead of the same instance every time. This way each user sees and modifies his or her own copy and there won't be concurrency issues… at least within vCO.
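A sketch of the idea, refining the hypothetical cache from the previous tip (copy() here is just a manual copy of the attributes; a copy constructor or any other deep-copy mechanism works equally well):

```java
/** Hypothetical model object that can produce an independent copy of itself. */
public class VirtualMachine {

    private final String id;
    private String name;
    private int memoryMb;

    public VirtualMachine(String id, String name, int memoryMb) {
        this.id = id;
        this.name = name;
        this.memoryMb = memoryMb;
    }

    /** Every caller gets its own instance to look at and modify. */
    public VirtualMachine copy() {
        return new VirtualMachine(id, name, memoryMb);
    }

    public String getId()   { return id; }
    public String getName() { return name; }

    // remaining getters/setters omitted
}
```

And in the cache / find path, the cached instance itself never leaves the cache:

```java
/** Revised find path of the hypothetical cache: always hand out a copy. */
public VirtualMachine findVm(String id) {
    VirtualMachine cached = vmsById.computeIfAbsent(id, client::getVirtualMachine);
    return (cached == null) ? null : cached.copy();
}
```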

  • Notify others about changes

When we use a cache and we clone objects, we may run into problems. The biggest one is that the object a user is looking at may not be the latest available version of it. For example, when a user is displaying the inventory the objects are loaded once, but if at the same time another user is changing some of those objects, the first user won't notice those changes. Solution: notify through the vCO Plug-in API (PluginWatcher, IPluginPublisher, etc.) that something has changed, so that other vCO client instances can see the changes. This also applies to a single vCO client instance, when changes to one object of the inventory affect other objects of the inventory that need to be refreshed too. The operations that typically require notifications are adding, updating and deleting objects, whenever those objects (or some of their properties) are shown in the inventory.
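A sketch of where the notification fits in an update operation. InventoryNotifier is a hypothetical wrapper of our own; the real publishing goes through the IPluginPublisher/PluginWatcher API mentioned above, and its exact calls should be taken from the SDK javadoc rather than from this sketch:

```java
/** Hypothetical abstraction over the vCO notification mechanism. */
interface InventoryNotifier {
    void notifyInventoryChange(String type, String id);
}

/** Hypothetical update path that lets other vCO client instances see the change. */
public class VmService {

    private final RemoteServiceClient client;   // hypothetical client from the model module
    private final VmCache cache;                // cache sketched in the previous tips
    private final InventoryNotifier notifier;

    public VmService(RemoteServiceClient client, VmCache cache, InventoryNotifier notifier) {
        this.client = client;
        this.cache = cache;
        this.notifier = notifier;
    }

    public void updateVm(VirtualMachine vm) {
        // 1. Commit the change on the server side.
        client.updateVirtualMachine(vm);

        // 2. Drop the stale cached copy so the next find() serves fresh data.
        cache.invalidate(vm.getId());

        // 3. Tell everyone else: the implementation of this wrapper would publish an event
        //    through IPluginPublisher/PluginWatcher so that other inventories get refreshed.
        notifier.notifyInventoryChange("VirtualMachine", vm.getId());
    }
}
```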

  • Be able to find any object at any time

The find method from the IPluginFactory interface must be implemented so that it can find objects from just the type and the id, that is, without assuming that, for example, the objects are already in the cache or that connections to 3rd-party services are already established. The find method may be invoked right after vCO restarts and resumes a workflow, so perhaps "nothing" will exist before that moment.
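As a sketch, and assuming the usual find(type, id) entry point of IPluginFactory (the helpers below are hypothetical), the method should be able to rebuild whatever it needs:

```java
/** Hypothetical find path: must work even with an empty cache and no open connection. */
public Object find(String type, String id) {
    if ("VirtualMachine".equals(type)) {
        // 1. Try the cache first; it may be empty right after a vCO restart.
        VirtualMachine cached = cache.get(id);
        if (cached != null) {
            return cached.copy();
        }
        // 2. Fall back to the remote service, (re)establishing the connection if needed.
        RemoteServiceClient client = connections.getOrConnect();   // hypothetical helper
        VirtualMachine fresh = client.getVirtualMachine(id);
        if (fresh != null) {
            cache.put(id, fresh);
            return fresh.copy();
        }
    }
    return null;   // unknown type or object not found
}
```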

  • Simulate a query service if we don't have one

The vCO client may need to query for some objects in specific cases, or to show them not as a tree but as a list or a table, for example. That means that our plug-in must be able to query for a set of objects at any moment. If our 3rd-party technology offers a query service, we just need to adapt it and use it; otherwise we should be able to simulate one somehow, despite the higher complexity or lower performance of that solution.
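If the 3rd-party API can only list everything, the query can be simulated by filtering on our side. A minimal sketch with hypothetical names; slower than a real server-side query, but enough to back lists and tables in the client:

```java
import java.util.List;
import java.util.Locale;
import java.util.stream.Collectors;

/** Hypothetical "query service" simulated on top of a plain list-all call. */
public class VmQueryService {

    private final RemoteServiceClient client;   // hypothetical client from the model module

    public VmQueryService(RemoteServiceClient client) {
        this.client = client;
    }

    /** Emulates a query by name: fetch everything and filter locally. */
    public List<VirtualMachine> findByName(String query) {
        List<VirtualMachine> all = client.listVirtualMachines();
        if (query == null || query.isEmpty()) {
            return all;
        }
        String needle = query.toLowerCase(Locale.ROOT);
        return all.stream()
                  .filter(vm -> vm.getName().toLowerCase(Locale.ROOT).contains(needle))
                  .collect(Collectors.toList());
    }
}
```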

  • Find methods shouldn't throw runtime exceptions

The methods from the IPluginFactory interface that implement the searches inside the plug-in shouldn't throw exceptions at runtime, whether checked or unchecked. Such exceptions can be the cause of strange "validation error" failures while a workflow is running. For example, between two nodes of a workflow, the find method is invoked if an output of the 1st node is an input of the 2nd. At that moment, if the object is not found because of some runtime exception, we will probably get no more information than a "validation error" in the vCO client. After that, how much information we can dig out of the log files depends on how the plug-in logs the exceptions.
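In practice this means wrapping the search in a try/catch, logging everything we know, and returning null (or an empty result) instead of letting the exception escape. A sketch, using SLF4J for logging just as an example and the hypothetical cache from the previous tips:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/** Hypothetical defensive find: never lets an exception escape towards the workflow engine. */
public class SafeFinder {

    private static final Logger log = LoggerFactory.getLogger(SafeFinder.class);

    private final VmCache cache;   // hypothetical cache from the previous tips

    public SafeFinder(VmCache cache) {
        this.cache = cache;
    }

    public Object find(String type, String id) {
        try {
            return cache.findVm(id);
        } catch (Exception e) {
            // The vCO client will only show a generic "validation error", so the log file
            // is the only place where the real cause will be visible.
            log.error("Error finding object of type '{}' with id '{}'", type, id, e);
            return null;
        }
    }
}
```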

  • Prefix the name of our scripting objects

Inside the vso.xml file of our plug-in we have to define our scripting objects. One of the properties we have to define for each object is its name. Usually the name will be the name of the Java class that implements the scripting object internally, and the plug-in users will use that name to work with our object, for example inside a piece of scripting code to instantiate a new object (e.g. new MyObject()). The problem is that if another plug-in in use defines an object with the same name, it won't be easy to distinguish them. That's the main reason for prefixing our scripting object names with a short prefix that reminds the user of the specific plug-in.

To be continued in Part 2