iDempiere workshop 2015/transcript
First day - Monday
Agenda
We looked into the agenda at IDempiere_workshop_2015 and talked about additional things:
- Commercial
- sales stoppers by areas
- iDempiere distros
- iDempiere light
- Forums - functional - or open an iDempiere-technical forum
- Collect information about implementation
- plugin to collect anonymous information
- UX
- Usability of webstore - embedding into iDempiere
- JSF?
- Usability of iDempiere - UI - User experience
- Functional
- Asset Accounting
- surcharges and discounts conditions
- pricing matrix
- global vs. specific price list
- BOM alternatives/optionals logic
- External notifications SMS, mailers, etc
- EMail-Marketing
- Order document type hardcoded business logic
- Workflows
- Quick Info
- LCO Global taxation
- Alerts
- System configurator organization
- Vision about manufacturing
- Average costing
- Reactivating invoices, payments, bank
- Dictionary
- Adding fields to sales orders - issues about positioning
- Every record must have a unique key
- Yes, about 20 tables have no key; the most important is product price. Others are access tables. We do not need it for Storage or translations. We can create a JIRA ticket and contribute a patch
- Technical
- OSGi in general
- Complex automated testing - jmeter, fitnesse, selenium
- RESTful webservices - token
- Dependencies of jasper libraries - jasper core cannot be replaced
- Mobile UI - Synchronize
- extending
- Best practices for community plugins
- Events generic
- Replication
- Performance
- AWS
- Database scaling
- Deployment Process - best practices
- Memory consumption
- opening multiple windows
- Periodical restart
each record should have a primary key
Each record should have a primary key: primary keys are needed to audit changes in the database, and that is needed for pricing. Chuck wants to create a JIRA ticket for that (JIRA). Someone has to work on it. We don't think we need primary keys for Storage (too much overhead) nor for translations. There are some more tables to think about.
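A starting point for finding the affected tables could be a dictionary query along these lines (a hedged sketch only; it lists non-view tables without any column flagged as key, and would need refinement for multi-key and parent-link tables):

```sql
-- Sketch: list dictionary tables that have no column flagged as key.
SELECT t.TableName
FROM AD_Table t
WHERE t.IsView = 'N'
  AND NOT EXISTS (
    SELECT 1
    FROM AD_Column c
    WHERE c.AD_Table_ID = t.AD_Table_ID
      AND c.IsKey = 'Y'
  )
ORDER BY t.TableName;
```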
Performance
speed performance
- Chuck's idea: change the log4j output to be better parseable (creating a CSV file).
- Carlos' idea: a window and view collecting all audit tables (a window to ask "what happened in this time frame in the system") (JIRA).
- We could add a log of SQL queries (while we are working on logging - that's not about performance but about debugging) (JIRA)
- Traffic splitting can be done with pfSense. That allows routing iDempiere traffic over a different network connection than users watching YouTube.
- Performance issues can be in the network, the apache/nginx proxy, or in iDempiere itself. In Carlos' experience, in most cases the problems behind user complaints ("everything is running slow!") are outside of iDempiere. If iDempiere is the problem, in most cases it comes down to bad code.
- If your server runs at 100% there are tools to create a Java VM memory dump. It can be analysed in Eclipse to see what is happening.
- You can run the server in debug mode, but that hurts performance (even when you are not using it). In debug mode Eclipse can connect to the running server and show which code the long-running processes are executing.
- There is no way to stop a running process. That could be changed by changing the code of the processes, but that is not easy to do everywhere. Up to now we have neither an interface nor a user interface for that. We could provide a template for how to stop a running thread (JIRA)
- Often the problem is the code (mostly self-written plugins): it is best to keep transactions short
- If you have a very long-running process that uses a transaction you have two possibilities:
- a) you can use a PostgreSQL setting to increase the transaction timeout. That can make the whole system very slow - especially for other concurrent users or processes.
- b) cut your transaction into smaller pieces. That is the advised solution: keep your transactions as short as possible.
- idea: schedule background job (JIRA)
- idea: force background based on parameters (JIRA)
- pgbadger is a tool to analyse PostgreSQL queries. It parses the queries from the PostgreSQL logs and helps to find out which queries take more or less time.
- pgtune can help tuning the postgres configuration
- You can solve some issues in complex postgres queries by adding indexes
- If the CPU is not at 100% but the process is stuck, you have to think about locks. In most cases locks can be resolved by writing the code better.
- Examples are AD_Sequence and M_Storage. If you lock these in a transaction and use a long-running loop you create a bottleneck.
- You can sometimes refactor your code to fetch document numbers at the end of the process
- In pgAdmin you can see the PostgreSQL backend process id of a locked process. Carlos used a new column in AD_ChangeLog with a default value of "pg_backend_pid()". That hurts performance, but it helps to know which iDempiere process belongs to which PostgreSQL backend process.
- idea: document sequences admitting holes (JIRA)
- If a table has a cache it has a ".get(...)" method that reads from the cache. That is faster and uses less memory.
- Chuck to create a JIRA ticket to change AD_Client_ID, CreatedBy and UpdatedBy to the reference type "Search" instead of "Table" or "Table Direct". The reason is to prevent the system from trying to dereference values in the DB and store them in memory when they will never be used in a read-only situation. Consider creating an SQL query to show all records where either (1) a table column does not have a corresponding field, or (2) a table column only has fields that are flagged as read-only or not displayed.
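The suggested query could look roughly like this (a hedged sketch against the application dictionary; column names are the standard AD_Field flags):

```sql
-- Sketch: columns that have no field at all, or only fields that are
-- read-only or not displayed.
SELECT t.TableName, c.ColumnName
FROM AD_Column c
JOIN AD_Table t ON t.AD_Table_ID = c.AD_Table_ID
WHERE NOT EXISTS (
  SELECT 1
  FROM AD_Field f
  WHERE f.AD_Column_ID = c.AD_Column_ID
    AND f.IsDisplayed = 'Y'
    AND f.IsReadOnly = 'N'
)
ORDER BY t.TableName, c.ColumnName;
```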
memory performance
- A memory issue can arise if you use a Table Direct field on a big table. This kind of field reads all records to create the pull-down list. There is a hard-coded limit (of about 200 or so). Exceeding it creates a message in the log, and the user gets only part of the entries instead of all of them.
- Using a search field instead of a table list improves performance a lot
- If you use validations or dynamic validations, only the resulting (shorter) list takes memory in the browser. Adding a validation may therefore also help solve memory issues.
- There is the idea to not allow the user to open the very same record in a second window, or to make the second window read-only (and show the user a message like "you opened this record twice")
- There is a plugin from Nicolas that prevents a user from logging in a second time
- idea: restrict number of open windows - in general (JIRA)
Periodical restart
ADempiere needed a lot of restarts to run stably. iDempiere and Java 7 improved that a lot. In principle you can restart the server once a day. An idea is to have a script that restarts the server only if no processes are running (JIRA).
Better search index
We spoke about using Lucene (which is e.g. used by Solr) as a search index. Norbert's idea was to create a special type of search field that uses the Lucene index for searching.
To use that search index you have to update the index after changes in the database. That does not have to happen synchronously; it can be done in another thread or the like.
A search like "%key" can be indexed neither in PostgreSQL nor in Oracle. It causes a full-table scan.
Another problem is that big tables take a lot of time to display if you forget to give a search key. This does not directly belong to the search index but to a better query. In MRole you can restrict the maximum number of records a query may return.
Database scaling
haproxy is a proxy that can load-balance users across many iDempiere application servers. All servers use the same database server. (Chuck Boecking has used haproxy for a year and a half. It is very well documented, and for him it works very well.)
The database can be replicated by PostgreSQL to another server.
- The mode of replication can be "immediate". That makes sure all clients see the same data, but it makes commits (according to Carlos' tests) about 3 times slower. A problem in Carlos' test is that the replica server shows a transaction as committed when the data is in the WAL but not yet in the table. That can break iDempiere processes.
- In "deferred" mode you are not sure that the replica server has the same data. That is not slower than running without replication.
You can use "deferred" mode for backups. You can lose two seconds of data without a performance penalty.
There is a program "pgpool" that you can use as a load balancer for PostgreSQL. Carlos tested it (with PostgreSQL 9.x). You create a PostgreSQL master server and several read-only replicas; all queries go through pgpool. All queries that change data, and function calls that write, have to be routed to the master server. Any select outside a transaction can be served by a replica (info windows and reports - but not financial reports, they have a big performance hit).
An alternative to PostgreSQL replication (which is based on the WAL) is pgpool replication, which sends all statements to all servers. That worked very well, but with two servers it took 5 times as long as with a single database.
A much better idea would be to separate the read and write calls in the iDempiere code. Walking Tree worked on that: http://blogs.walkingtree.in/2013/03/07/seperate-database-for-read-and-write-in-adempiere/ This is not yet in iDempiere (JIRA)
Chuck also recommends pgbouncer. It allows many more application servers on a single database instance because you do not need memory for every connection. Without it, every application server opens about 10 database connections, and each of these starts a backend process in the PostgreSQL server that uses memory.
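A minimal pgbouncer setup in front of iDempiere might look like the following (all hosts, names and sizes are illustrative, not recommendations):

```ini
; Hypothetical pgbouncer.ini for pooling iDempiere connections
[databases]
idempiere = host=127.0.0.1 port=5432 dbname=idempiere

[pgbouncer]
listen_port = 6432
pool_mode = session        ; iDempiere holds stateful connections
max_client_conn = 500      ; connections from all application servers
default_pool_size = 20     ; real backend connections per database/user
```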
Deployment Process - best practices
Get updates for your installation from a p2 repository using the script update.sh included in the server installation.
Jenkins can be used by everyone - just ask for a user if need be.
Plugin installation on the server is better done using the console (putting the jar into the plugins directory) than using the Felix web console, because with Felix you do not know exactly where the code is.
Staged deployment can be done on a "cascade" of servers, for example:
- test
- integration test
- user acceptance test
- production
Let the developers talk to each other to avoid e.g. the same column being created twice. Share changes using 2Pack.
You can use 2Pack to export almost any data and that way deploy e.g.
- new price lists for a tenant
- new windows to the system client
Be aware that you can not import a system 2Pack to a tenant.
2Pack has a feature to log UUID relations, recording for example that you imported data from a different tenant or installation and that the data got new UUIDs in this installation or tenant. If you then import a second 2Pack containing updated data, it knows the relation and does not create new entries but updates the relevant ones.
2Pack can also be used to deliver "stuff" within a plugin which is then activated when the plugin is started. There are two different activators taking care of that. Refer to Developing Plug-Ins - 2Pack - Pack In/Out for more info.
Sometimes a 2Pack import fails during plugin start. You can then "force" a reimport of the 2Pack by editing the version number in the Pack In window (e.g. from "1.0.0" to "0.1.0.0") to make the activator believe it is seeing a new 2Pack during plugin start and import it again. As the 2Pack is also stored as an attachment to the record, you may also just start the import process from that entry.
Experience is that currently financial report setup is better exported using csv instead of 2Pack.
A 2Pack can be easily created from any page using the export button and selecting the "zip" format. Such an export of data can be very helpful to developers when users report a (supposed) bug.
AWS
Carlos thinks that Amazon services are quite expensive in the long run. He tried them, and the servers were about three times slower than a dedicated server.
The application server and the database server should run on different machines for any system that is not very small. That helps finding bottlenecks and allows better scaling. It is easy to use several application servers. Some users recommend about 20 users per instance of the iDempiere application server. Such a server needs about 8GB of RAM.
Chuck Boecking and Norbert Bede have more experience with Amazon services and like how they scale.
stop processes
It is an idea to create a way to stop a process (JIRA). That means we need a UI for it (a stop button). We have to know what happens if the user closes the window and/or the session gets lost. We need a template for how to write a process that can be stopped. You have to follow Java best practices regarding the isInterrupted method, which allows ending a process that is inside a loop. We also need a way to stop long-running processes outside of iDempiere code. For JasperReports there is the MaxPagesGovernor interface to do that. And we need a way to stop a running PostgreSQL query (from another thread).
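The isInterrupted pattern can be sketched in plain Java like this (names are illustrative; this is not iDempiere's process API):

```java
// Sketch of a stoppable worker loop: it checks the interrupt flag on every
// iteration, so another thread (the hypothetical "stop button") can end it.
public class StoppableProcessSketch {
    // Performs up to 'max' units of work; stops early if interrupted.
    public static int run(int max) {
        int done = 0;
        while (done < max && !Thread.currentThread().isInterrupted()) {
            done++; // ... one unit of work (e.g. one record) ...
        }
        return done;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> run(Integer.MAX_VALUE));
        worker.start();     // long-running "process"
        worker.interrupt(); // the stop request from another thread
        worker.join();      // returns quickly because the loop sees the flag
        System.out.println("stopped");
    }
}
```

The key design point is cooperative cancellation: the loop must poll the flag itself, which is why every long-running process would need this template applied to its code.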
Second Day - Tuesday
At the beginning we discussed the agenda for today. We want to talk about OSGi in general and about the pricing system, including new ways to use it to enhance the way prices are calculated.
Configuring iDempiere during startup
You can search the code in Eclipse for "System.getProperties". That shows some interesting possibilities to configure iDempiere during startup.
PostgreSQL connection parameters
The following runtime parameter allows setting additional parameters on the PostgreSQL connections:
-Dorg.idempiere.postgresql.URLParameters="defaultRowFetchSize=1000"
debug SQL queries
There is a new parameter to see all SQL database queries issued by iDempiere. You cannot change the value on a running server; making that possible might be an improvement (JIRA).
-Dorg.idempiere.db.postgresql.debug=true
connection pool
The connection pool that iDempiere uses is provided by c3p0.
You can change c3p0 pool parameters like the number of connections in the file PostgresSQL/pool.properties
connection leaks
Yesterday, while considering performance problems and locks, we forgot to talk about connection leaks. Now, while talking about the number of connections, we catch up on that.
You always have to close a connection: close ResultSet and Connection objects in a finally block. A leak can happen in a hard-to-find way if you reuse a variable, e.g. for a statement, without closing the former object. If you do not close the connection in a finally block, the connection will stay open on the PostgreSQL server. That is a quite expensive resource to leave open. You can watch the postgres server holding more and more open connections; they are only closed after the connection timeout of postgres.
Carlos said that there were many close calls without a finally block in the ADempiere code. Today all of them should be fixed in iDempiere, but you cannot be sure about plugins.
Best practices are to use the DB class "DB.get(...)" or the Query class.
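As a minimal illustration of the finally-block rule (with a stand-in resource instead of a real JDBC Connection, so it is self-contained):

```java
// The resource is closed in a finally block, so it is released even when
// the work throws. FakeResource stands in for a JDBC ResultSet/Connection.
public class CloseInFinallySketch {
    static class FakeResource implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    // Returns true if the resource got closed although the "query" failed.
    public static boolean closedDespiteError() {
        FakeResource rs = new FakeResource();
        try {
            throw new RuntimeException("query failed");
        } catch (RuntimeException e) {
            // handle or log the error
        } finally {
            rs.close(); // without this line, a real connection would leak
        }
        return rs.closed;
    }

    public static void main(String[] args) {
        System.out.println(closedDespiteError()); // prints "true"
    }
}
```

On Java 7 and later, a try-with-resources statement generates this finally block automatically and is the preferred form.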
best practice tags
Chuck Boecking's idea is to create "BEST PRACTICE" tags in the code to mark best practices. (JIRA)
Diego Ruiz did a list of best practices in the wiki (Contributing_to_Trunk).
Pricing System
How does it work, what is part of it
These tables are used in the pricing system:
- M_PriceList
- M_PriceListVersion
- M_ProductPrice
- M_ProductPriceVendorBreak (it works not only for vendors; Deepak extended it to work for customers too)
The price lists are set on the Business Partner and the Business Partner Group. The price list is used (and can be changed) in the Order and the Invoice.
There is also a Discount Schema (with Schema Line and Schema Break).
The end date of a price list version is the date of the next version (there is no end date in the record).
The Discount Schema Break can be used to calculate a discount that changes the price list. You do not have to copy the price lists to use that. It is calculated when the price is used.
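A heavily simplified sketch of how a break-based discount could turn a list price into an actual price at the moment the price is used (the break table and all numbers are invented; the real logic lives in MProductPricing and the discount schema records):

```java
// The highest break quantity reached determines the discount percent
// applied to the list price. Illustrative only.
public class DiscountBreakSketch {
    // break quantities (ascending) and the discount percent for each break
    static final int[] BREAK_QTY = { 1, 10, 100 };
    static final double[] DISCOUNT_PCT = { 0.0, 5.0, 10.0 };

    public static double priceFor(double listPrice, int qty) {
        double pct = 0.0;
        for (int i = 0; i < BREAK_QTY.length; i++)
            if (qty >= BREAK_QTY[i])
                pct = DISCOUNT_PCT[i]; // last (largest) break reached wins
        return listPrice * (100.0 - pct) / 100.0;
    }

    public static void main(String[] args) {
        System.out.println(priceFor(20.0, 50)); // the 10-piece break applies
    }
}
```

Because the discount is computed on the fly, no copy of the price list is needed, which is exactly the property described above.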
There is a contribution from Adaxa (Dirk applied it some time ago but it does not work any more) to have a UOM field in the price list.
Another thing that can change prices are the promotions.
change the pricing system
If you want to change the way prices are calculated there is only one class to change: MProductPricing. But there are also three SQL functions and some views that use the prices. These functions compose the price of a BOM if no price is set. It may be a compromise to use the advanced functionality only if there is no BOM, or to always set a price for BOMs.
- bompricelimit
- bompricelist
- bompricestd
These functions have to be changed if you use a new pricing system. Also note:
- the view m_product_substituterelated_v is shown in the InfoWindow
A better approach might be to use a callout to override how a price is calculated.
The database functions show the base price. If we want to add dynamic conditions we have to do that in Java code. A dynamic price can not be used in views and reports or in the InfoWindow.
Shaun Melville has a use case where you need a matrix lookup in a "fee" table to calculate the price. That is a use case where you need a lookup in a matrix based on some fixed SQL code. It is not dynamic, but it still can not be done with standard iDempiere pricing.
Anton Fildan showed us how SAP does it: http://help.sap.com/saphelp_46c/helpdata/en/dd/56168b545a11d1a7020000e829fd11/content.htm?frameset=/en/dd/56168b545a11d1a7020000e829fd11/frameset.htm&current_toc=/en/de/7a8534c960a134e10000009b38f83b/plain.htm&node_id=4&show_children=true#jump4
Improvements can be done in the price list table to make it possible to use the UOM (unit of measure) and ASI (attribute set instance) in the price list matrix. This gives us a matrix to create the base prices for InfoWindows, reports, etc.
The second step is the dynamic part of the pricing. It is done with the Discount Schema based on Product Category, Product and Quantity. You can think about extending that too.
If we want to change the MProductPricing class it needs to become an interface that can be exchanged via OSGi. Some of the information we want to use can only be accessed through the order line (or invoice line). We could use attribute set instances (ASI) for that (and fill them with a callout), but that may lead to problems if you exchange the product or do not save the order line. Or we can extend the class to get the context (and window and tab number) of the window. That looks like the best approach.
Summary what can be done with the pricing system
- adding uom to product price (there is a patch from Adaxa/Dirk Niemeyer - look for Price UoM extension)
- adding asi (attribute set instance) to product price
- adding uom/asi on the discount schema break table
- discount schema break table not only by percentage but also by amount. The amount can override or be added, and a field can say whether the calculation stops there or continues.
- If we can make MProductPricing an OSGi interface we can do everything in the most flexible way. It has to be extensible (e.g. by overloading) and needs some more constructor parameters like the context.
Promotions
Promotions are also a way to change prices. You can find some documentation and read how to enable them at http://www.adempiere.com/Enabling_Promotions. You activate them by adding a model validator class in the Client window.
First you define a promotion group. With that you can group products, for example Coca-Cola and Coca-Cola Zero.
Then you define a Promotion. You can set a campaign, which belongs to accounting.
- Pre Conditions tab: the promotion is applied under several conditions. You can set different dimensions like a business partner, a business partner group, a price list and so on. You can set a limit so that a promotion is given only 1000 times, for example.
- Promotion Line tab:
- Quantity distribution: you can say something like: if the qty is >= 5 add one more, or add one for every 5.
- Reward: what you get as a reward
Deepak has a patch on JIRA to improve this. Until now, if you buy 6 you pay 5; the patch makes it possible that if you buy 5 one more is added automatically for free.
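The "add one for every 5" idea can be sketched as follows (purely illustrative, not the promotion engine's code):

```java
// One free unit is granted per full block of ordered quantity.
public class PromotionQtySketch {
    // ordered quantity -> number of free units added
    public static int freeItems(int orderedQty, int perBlock) {
        return orderedQty / perBlock;
    }

    public static void main(String[] args) {
        // with the patch: buy 5 and one more is added automatically
        System.out.println(5 + freeItems(5, 5)); // prints 6
    }
}
```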
Some people know that Adaxa has good documentation about promotions and how to configure them. You have to ask Steven Sackett for more information.
At the end only one promotion per order line is used.
Chuck asked if we can pull the promotions out of the core as a plugin. He thinks that is easier to work with for private extensions of the promotion model. (JIRA)
If you enter a sales order the promotion is applied when the order is prepared (or even completed) with the DocAction button.
You can use a promotion code: you enter it in the Pre Conditions tab and use it on the order line to apply the promotion.
deactivate hazelcast cache
You can deactivate the Hazelcast cache by deactivating the OSGi plugin org.idempiere.hazelcast.service in the OSGi console (use "ss hazel" to find the number of the plugin in your running installation). When you start a Hazelcast instance from inside Eclipse there is no well-configured hazelcast.xml file. That makes it broadcast a lot, which creates wrong cache values and a lot of traffic on the network - especially if several instances are running on the LAN. If you start iDempiere from the 3.x installation package there should be a prepared hazelcast.xml file that uses a Hazelcast group to avoid connecting to other servers. You can even stop the broadcasting completely by deactivating broadcasts in the hazelcast.xml file.
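The relevant part of such a hazelcast.xml might look like this (a sketch for the Hazelcast 3.x schema; the group name and member address are illustrative):

```xml
<!-- Disable multicast so a development instance does not join other
     servers on the LAN; only the listed members may form a cluster. -->
<hazelcast>
  <group>
    <name>my-idempiere-dev</name>
  </group>
  <network>
    <join>
      <multicast enabled="false"/>
      <tcp-ip enabled="true">
        <member>127.0.0.1</member>
      </tcp-ip>
    </join>
  </network>
</hazelcast>
```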
OSGi
In ADempiere we had the "classpath hell": the core, patches, extensions, customizations etc. were all in one single classpath namespace. That made it very hard to manage the versions of all the jar files used by the different parts of the code. OSGi solves that problem by separating the classpath namespaces into different plugins. In OSGi terms plugins are called "bundles".
The different plugins allow to separate and isolate different things. You know who is responsible for which part of the code.
OSGi interfaces work with implementations that are provided as OSGi services; factory classes look them up through the OSGi layer. Every implementation has a "service ranking" entry.
Costing
Warning - :-) I did not really understand everything of that while doing the transcript. Beware of errors. And please(!) correct it if you have more knowledge about costing. Thanks!
standard costing
The most simple costing is "Standard costing". It is used without further configuration.
average costing
We talked about "average costing". It works if you follow certain rules.
You have a "Purchase Order", a "Material Receipt" and a "Vendor Invoice" (the last two are connected through a "Match Invoice"). This Match Invoice updates the product cost.
For average costs you have to know about some restrictions. Carlos does not know them all (as a list or such), so you have to try it out for your own business case. For example, it is better not to create a Material Receipt before the invoice is there so that you can match them immediately. You should not reverse documents that are used but reverse-accrue them instead.
Deepak showed us an interesting example. If we receive a Material Receipt of 100 and our cost is 10€ at the beginning, this gives a stock of 100 x 10€ = 1000€. If we take away 40 (60 left), the cost of that is 400€. Now we get the vendor invoice, and it carries another price of 12€ (1200€). iDempiere creates a new cost entry of 13.33€. That leads to balanced accounting. (Without average costing the difference is posted to the price variance account.)
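One way to arrive at the 13.33€ in Deepak's example is the following arithmetic (a sketch of the calculation, not iDempiere's costing code):

```java
// Receive 100 at a cost of 10, issue 40, then the vendor invoice arrives
// at 12. The remaining 60 pieces absorb the part of the invoice amount
// not yet consumed by the issued quantity.
public class AverageCostSketch {
    public static double newUnitCost(int received, double oldCost,
                                     int issued, double invoicePrice) {
        double invoiceTotal = received * invoicePrice; // 100 * 12 = 1200
        double consumed = issued * oldCost;            //  40 * 10 =  400
        int remaining = received - issued;             //        60 left
        return (invoiceTotal - consumed) / remaining;  // 800 / 60 = 13.33
    }

    public static void main(String[] args) {
        System.out.printf(java.util.Locale.US, "%.2f%n",
                newUnitCost(100, 10.0, 40, 12.0)); // prints 13.33
    }
}
```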
We talked about three different cases:
- If we have stock and the PO matches the invoice: no posting on inventory and COGS
- If PO and invoice do not match and we have stock: iDempiere posts to the Inventory account
- If PO and invoice do not match and we have no stock: iDempiere posts to the COGS account
thoughts about the future
Carlos' idea is to have an OSGi interface for costing that allows exchanging the implementation. The MCost class has to be refactored so that this is easier to do in a plugin. (JIRA) The average costing has some issues that depend on the exact business case, and it can make more sense to create implementations for specific cases.
Norbert Bede did a lot of changes to the average costing implementation in his internal fork, and Deepak too. They want to contribute them, but it will be much work.
Chuck advises setting the default costing method to "Average PO" (JIRA).
Carlos advises creating some asserts. For example, you should not be able to create a Material Receipt before you have a PO when you are using average costing.
summary about costing
- Refactor MCost
- Make a CostFactory for plugins to make cost calculation plugable
- Average PO becomes Average costing
- Average Invoice is to be inactivated
- periodic costing can be done as a plugin (Norbert implemented it as a flag in the accounting schema). That can be easier to use.
- we need use cases and test cases (e.g. with fitnesse) to check the new implementation (landed cost, estimated landed cost, and many other cases have to be tested)
- recalculate cost table (cut the data at a certain point in time and recalculate starting from that point)
- reposting has to be considered
Third day - Wednesday
overview of automatic testing in iDempiere
"Fit" (Framework for Integrated Test) is a project that helps with automated testing. An addition to it is "Fitnesse", which allows creating wiki pages to define software tests that are run using the Fit framework.
Selenium is a tool for user-interface testing. It can be integrated with Fitnesse, but it is more complex to set up and make work with iDempiere.
Some JUnit tests are in the core but not very many.
JMeter can be used for load testing. Deepak has a tool to use that with iDempiere.
Chuck has also a process and a script for automated testing.
JMeter
JMeter is a tool for load testing. It can record the HTTP traffic between the browser and the server. Then you can replay these requests (changed e.g. with randomized data) with many simultaneous requests to test the system behaviour under heavy load.
Deepak Pansheriya has some experience with it and shows us how to use it.
JMeter simulates all the traffic between the browser and the server.
- start JMeter GUI
- create a "Test Plan"
- add a thread group
- add http request
- add recording controller
- at "Workbench" add an HTTP proxy server
- choose a port (for example 8181)
- include patterns like ".*/zkau.*" and ".*\.zul"
- exclude patterns like ".*/zkau/comet.*" and many more. That has to be documented better. Deepak wants to provide a sample JMeter project (JIRA)
- start the proxy server with the "start" button. Now configure your Chrome or Firefox browser to use the JMeter proxy.
Normally zk uses changing random ids for the components on the screen. For testing you use the Eclipse launch configuration "server.product.functionaltest". That includes additional plugins for testing which make sure that the same ids are created for the web pages. This is done in the class AdempiereIdGenerator(?). You can see the ids in the HTML inspector of your browser.
The recorded HTTP queries include a desktop id (dtid); zk uses a desktop id with every request. For things like load testing you need a different desktop id for every session. For that, Deepak creates a random id on the call of index.zul (the first request). Deepak uses a CSV file with dtid, tdtid (temporary desktop id), tenant and role, read through a "CSV Data Set Config" in the Thread Group configuration of JMeter. One line of this CSV file is read for every connection that is made simultaneously to the server. When looking into the recorded requests you exchange the dtid entries with a variable reference like "${dtid}".
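Such a CSV file might look like this (all values invented for illustration; the column order follows the "CSV Data Set Config" described above):

```csv
dtid,tdtid,tenant,role
z_k3x9f,tmp_001,GardenWorld,GardenWorld Admin
z_p7q2d,tmp_002,GardenWorld,GardenWorld User
```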
If you look into a zk request you see a field with id "uuid". That is the generated id (or a random id if you use standard iDempiere and not the functional-test plugins).
You have to change all requests at the Recording Controller to use the dtid variable instead of the fixed dtid.
Now you can use HTTP Request Defaults
In the "Thread Group" configuration you can set how many threads are started in which time span and how the threads are used.
In the "Summary Report" of the "Recording Controller" you can see statistics after the test has run. You start the test with the toolbar button with the green triangle. Then you can see samples being created (a sample is a single request).
JMeter slows down if it runs a lot of threads. To get realistic data you should use several JMeter machines, each running about 50 threads.
As an example: Deepak worked with a configuration to test 450 concurrent users working with iDempiere at the same time.
Deepak wants to give us a link to a small sample JMeter configuration.
iDempiere installation hardware requirements
(While talking about JMeter load testing we thought about the hardware requirements for different loads.)
Some examples about hardware requirements:
- Deepak worked on a system at Trekglobal that could handle 450 concurrent users creating records and the like. We don't know the setup of this system.
- Carlos worked on a system with 256MB of memory. iDempiere worked on that with one user. He did not try financial reports or such things.
- The globalqss demo runs on 512MB (including the database). It allows financial reports and everything. It crashes from time to time but it works as a demo.
- If the question is: "How big is a virtual system to try out iDempiere for first steps of newbies?" our answer is: 1GB (or more) and one core.
Automated tests using Chucks Boeckings way
Chuck Boecking did a series "Automated and Regression Testing with Fitnesse and Selenium" in his ERP Academy. (You can ask him for access to it.) Today he does not like that approach any more: it was harder and less flexible than he expected.
Chuck now uses iDempiere processes to create test data, do business logic and then uses asserts to check the results. He has a class with a collection of all best practice code snippets e.g. to create a business partner, to create a product, etc.
He uses this class in a process to do all the tests he wants. He can create records, start processes, etc.
- There is a value object class ChuBoePopulateVO with all the test data.
- There is a ChuBoePopulate class with all best practices like createBP(...), createPayment(...), etc.
- The class CreateReplenishData consists of code that runs in one piece; several tests are executed in sequence.
Fitnesse
start Fitnesse server
Fitnesse is a framework for functional testing. There is an interface in the iDempiere server (a plugin that you can start) that allows the Fitnesse server to access iDempiere from outside. There are several predefined functions that you can use.
In the Eclipse project there is a Fitnesse launch configuration to launch the Fitnesse server. In the directory FitnesseRoot there are wiki pages that contain the tests.
You have to set a "String Substitution" of ADEMPIERE_WEB_PORT to the port of iDempiere inside Eclipse. That allows the fitnesse server to connect to the iDempiere server.
After starting the Fitnesse server using the fitnessetest launch configuration you can access Fitnesse through the web interface. You need to run iDempiere by launching server.product.functionaltest instead of server.product. That starts two additional plugins.
given examples in the iDempiere trunk
In the server you find a good number of tests that are already prepared. In TestLoginGardenAdmin you can see how a test for a login works.
The next test we looked at was TestCreateBusinessPartner. That shows how to include the login test to do both things in a sequence.
TestCashPosOrder is a good example for a more complex order. It reads business partners and products, creates a POS order, completes it and asserts the grand total, balances, shipment, shipment lines, invoice, invoice line, payment, etc.
There is no command line tool or Jenkins plugin at the moment. We would like Jenkins to start the tests in every cycle. (JIRA)
An idea of Carlos is to write something to capture
FitRecorder
There is a class FitRecorder. It lets you write Fitnesse tests by using iDempiere through the zk interface. You can use it by adding the FitRecorder plugin to the iDempiere server. It is a model validator with the name org.idempiere.fitrecorder.FitRecorder (you have to enter that into the model validator table).
At the moment the FitRecorder does not work. (JIRA)
zk testing with selenium
Heng Sin extended Fitnesse to work with Selenium for user-interface tests. You need to install the Selenium plugin in your Firefox or Chrome (or PhantomJS) to use it. When you start it, it opens the browser and drives the website, entering characters, clicking, etc.
JUnit testing
There is a plugin called org.adempiere.extend containing a number of JUnit tests.
To run them you need to create a file called test.properties defining the environment and setup to be used. There is a file testTemplate.properties in the plugin you can start from.
You can then right-click on that file and select "Run as -> JUnit Test". More details can be found on the related ADempiere JUnit Test wiki page.
plugin levels
Every plugin has a start level in the run configuration of Eclipse. The level sorts the plugins during startup of the OSGi container.
Carlos advises using level 5 for your own plugins. That means the iDempiere plugins (which go up to level 4) are loaded and started before your plugin. You can even use level 6 if you have several plugins with dependencies among them.
