rule-engine vs rules

If you remember one pearl of wisdom about business rule engines, I think it should be the relationship between the rule engine and the rules. These are the two main entities to *deploy* into an enterprise app.

Verizon’s circuit fault-isolator is a typical enterprise application using JRules. Think of your ent app as a host-app (or user, or caller) of the rule infrastructure. There is quite a lot of rule infrastructure to *deploy*, and you will soon realize the two main pieces are (A) the generic rule-engine and (B) your rules.

– The rule-engine is written by ILOG (or JBoss or whoever), but the rules are written by you.
– The rule-engine is a standard, generic component, but the rules are specific to your business.
– The rule-engine is first and foremost the interpreter of your rules.

* A good analogy is the XSLT transformer vs the XSL stylesheet. Your host application needs to load both of them into memory (see the sketch after this list).
* A similar relationship exists between spring the framework and the spring-beans you create.
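
To make the analogy concrete, here is a minimal Perl sketch, assuming the XML::LibXSLT module is available (all file names are made up). The host app loads (A) the generic engine and (B) your stylesheet, ie the “rules”:

use strict;
use warnings;
use XML::LibXML;
use XML::LibXSLT;

my $engine = XML::LibXSLT->new();    # (A) the generic engine, written by someone else

# (B) your "rules": the stylesheet you wrote (file names are hypothetical)
my $style_doc  = XML::LibXML->load_xml(location => 'my_rules.xsl');
my $stylesheet = $engine->parse_stylesheet($style_doc);

# the host app pushes its own data through engine + rules
my $input  = XML::LibXML->load_xml(location => 'input.xml');
my $result = $stylesheet->transform($input);
print $stylesheet->output_as_bytes($result);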

"perl -p" is for one-liners only

You may be tempted to use
#!/usr/bin/perl -p
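
Before weighing the drawbacks, recall what -p does. Per the perlrun documentation, it wraps your whole script in an implicit loop like this (note the invisible LINE label, which matters for drawback 4 below):

LINE:
while (<>) {
    # your script body runs here, once per input record
} continue {
    print or die "-p destination: $!\n";
}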

Top drawbacks, ranked
1) when complexity increases, the maintainer will ultimately have to consider dropping the -p switch and putting in a visible while() loop. Quite a few changes and lots of testing.
2) what if you need to write to multiple files? The implicit loop prints everything to one output stream.
3) inflexible — logic before/after the loop must be put in the BEGIN/END blocks

4) the implicit LINE label is invisible and confusing to some readers and maintainers

I think -p and -n are probably designed for
A) one-liners
B) scripts whose complexity will not grow
C) scripts without other maintainers

How object A gets a reference to B

Background: when applying design patterns to your business logic, you need to describe the business logic in terms of domain object names.

For an object A to get access to an object B, there are some extremely common patterns, ranked here by /incidence/ (ie how commonly they occur); a Perl sketch follows the footnotes:

* A.method1 receives an \\arg//B
** A.method1 receives an \\arg//Y which is a collection containing B
** A.method1 receives an \\arg//K with a getter returning B

* A calls some method that returns a B [1]
** A calls some method that returns a collection containing B
** A calls some method that returns a D with a getter returning B

* A makes an explicit call to new B() [2] [3]. Common
* A “knows” B from birth — A’s constructor initializes an instance variable to a B object

[1] Longer version: A.method2 (usually interpreted as a behaviour of object A) calls some method (defined in any class) returning a B
[2] or new C() which connects to a B via a collection or getter
[3] and invariably saves it in some instance/method-local variable
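
To make the ranking concrete, a small Perl sketch, where Portfolio plays the “A” and Account plays the “B” (all class and method names are made up):

use strict;
use warnings;

package Account;                   # the "B" of this post
sub new { my $class = shift; return bless { balance => 0 }, $class; }

package AccountRepo;               # hypothetical lookup service
sub find { return Account->new(); }

package Portfolio;                 # the "A" of this post
sub new {
    my $class = shift;
    # A "knows" B from birth: the constructor initializes an instance variable
    return bless { home_account => Account->new() }, $class;
}
sub credit {
    my ($self, $account) = @_;     # A.method1 receives an arg B
    $account->{balance} += 100;
}
sub rebalance {
    my ($self) = @_;
    my $found = AccountRepo::find('ACC-1');   # A calls some method that returns a B
    my $fresh = Account->new();               # A makes an explicit call to new B()
}

package main;
my $p = Portfolio->new();
$p->credit(Account->new());
$p->rebalance();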

each method call has a customer-object

J4: a fundamental concept for design patterns. It may be a simple concept, but it’s important to become so thoroughly familiar with it that you can spot the pattern without thinking.

A method call [2] usually has a primary [1] “customer object”, ie the object to consult, to enrich, to read or to modify. This object can be
– the call’s hosting object. eg: student.setAge()
– the call’s argument object. eg: increaseCreditLimit(customer)
– the call’s initiator object. eg: this.amount = product.discount();
– some other object. eg: remoteControl.setCommand(TVon); # TVon object has-a TV object, which is the customer object.
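
The same four cases as runnable Perl, with throwaway classes (every name below is hypothetical):

use strict;
use warnings;

package Student { sub new { bless {}, shift } sub set_age { $_[0]{age} = $_[1] } }
package Bank    { sub new { bless {}, shift } sub increase_credit_limit { } }
package Product { sub new { bless {}, shift } sub discount { return 5 } }
package TvOnCmd { sub new { bless { tv => 'the TV' }, shift } }   # has-a TV: the real customer
package Remote  { sub new { bless {}, shift } sub set_command { $_[0]{cmd} = $_[1] } }
package Order   {
    sub new { bless {}, shift }
    sub reprice {
        my ($self, $product) = @_;
        $self->{amount} = $product->discount();   # customer = the initiator ($self)
    }
}

package main;
Student->new->set_age(21);                    # customer = the hosting object
Bank->new->increase_credit_limit('cust-1');   # customer = the argument object
Order->new->reprice(Product->new);            # customer = the initiator, being enriched
Remote->new->set_command(TvOnCmd->new);       # customer = the TV inside the command object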

For each method call, you need to quickly spot the customer object when you communicate with other people: during design, documentation, coaching, …

[1] rarely two.
[2] we are talking about a call, not a method. Only when you use the method in business logic can you put your finger on the customer objects.

autosys timing data won’t show on a Command

You want to see a job’s exact timing data, such as its scheduled time of day.

Apparently, you can see it on a Box, not a Command.

When creating a job, you can specify the timing and the system won’t complain! But if you can’t see it later, then you can’t verify it — perhaps the timing data is silently lost.

–update:
An Autosys trainer said yes, you can have a /top-level/, /standalone/, /boxless/ command job with its own timing.
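
In JIL terms, such a job might look like the sketch below (job name, command, machine and time are all made up; the point is job_type: c plus start_times, with no box_name):

/* a top-level, boxless command job carrying its own timing */
insert_job: nightly_recon
job_type: c
command: /apps/recon/run.sh
machine: prod_host1
start_times: "23:30"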

How that S’pore HR portal failed #letter to friend

Thanks for your 3 reasons. They sound very similar to the undoings of many other dotbombs of the era.

I feel funding often follows sales “results”. Investors (if you had them) are business people — business people look at results.

As to customization, I am no expert but would still put forward my 2 cents — many successful ERP/CRM packages didn’t initially offer a lot of customizability but still attracted enough customers.

I studied Mambo and SugarCRM (both claim to be customizable) in some detail, as a developer and not a businessman. I think they are modular, extensible, plugin-friendly, with many skins and hundreds of configurable parameters, but they still show rigidity as soon as you try to get them to work “your way”. My experience with SugarCRM is that if I try 10 potential customers, at most 1 can manage to get it to work “his way”. Of course, the success rate could improve if I figured out more ways to be creative with SugarCRM. (like drawing a subway map with a spreadsheet?)

I think the key to this customization challenge is finding a special type of customer who doesn’t ask for customization. I think you know what I mean. Each HR software package was initially designed for a specific subset of companies. Because it was designed for them, they don’t need customization. The next, bigger circle of customers would need some, but hopefully minimal, customization.

“With our laser equipped, rechargable, transparent, odorless mouse trap, what homes should we target?”

On 10/28/07, Raja wrote:
>
> Hi,
>
> The company did not survive for 3 reasons
>
> 1. We were late to market. Not enough sales with all the competition.
> 2. Each of the customers wanted a lot of customisation to the base product –
> the technology and the framework we had at that time was not agile enough to
> adopt. We are talking about the early J2EE days 🙂
> 3. There were some funding issues also.

wiki for enterprise app documentation

Justification: jargon. In non-trivial enterprise app documentation there are usually too many (hundreds of) jargon terms for an uninitiated reader. A wiki can help. Let’s look at an example: under generous assumptions, every time a document mentions “Price Spread”, the phrase becomes a hyperlink to the wiki page for that term.

Justification: linking. A wiki naturally links up hundreds of related topics.

Justification: Easy update. Everyone can update every wiki page, without approval.

ROLE of a DB table in a financial app

A typical financial application could have these groups of elements:
– lots (dozens) of tables
– lots of stored procedures
– lots of classes
– [c] lots of batch jobs, all in a scheduling system with their complete command lines
– [c] lots of standalone batch scripts, not counting library files. Each standalone script is started from the command line.

As in chemistry, elements interact. When learning about one element, it’s imperative albeit painstaking to /map out/ [1] how it interacts with other elements. Tips for studying a db table:

– Your goal is, let’s repeat it, mapping out how the table interacts with other elements
– search for the table name in the app source code (see the sketch after this list). Many DB commands aren’t stored procs.
– search in the stored proc source
– once you have an important stored proc name, search for it in the app source. Start with one stored proc.
– for sybase, try sp_depends

[c = a strategic choke point, where you can search…]
[1] document if possible, which clarifies and confirms your own understanding.
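
A crude Perl sketch of the first two searches (the table name and directories are made up; a recursive grep does the same job):

use strict;
use warnings;
use File::Find;

my $table = qr/\bPositionSnapshot\b/i;        # hypothetical table name
find(sub {
    return unless -f $_;
    open my $fh, '<', $_ or return;
    while (my $line = <$fh>) {
        # print file, line number and matching line
        print "$File::Find::name:$.: $line" if $line =~ $table;
    }
}, '/apps/src', '/apps/sybase/procs');        # hypothetical source trees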

perl var dumper "synopsis"

Q: For simple variables, a Perl subroutine dump(‘price’) can dump the contents of @price [1] along with the variable name — “price” in this case. But do we ever need to pass in a reference, as in dump(\@price, ‘price’)? How about a lexical my $price declared in a nested while inside an if, wrapped in a subroutine?

A: I think sooner or later you will have to pass in a ref. dump(‘price’) can only locate @price via a symbolic reference in the package symbol table, so it can never see a lexical (my) variable; for those you must pass the reference yourself. To show the variable’s name, you then need to pass 2 args in total — ref + name, as in the sketch below.

[1] in dump(), print Data::Dumper->Dump(map {[$_]} @_); — with @_ = ($ref, ‘name’), this expands to Dump([$ref], [‘name’]), ie Dump’s (values, names) signature.
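
Putting the answer together: a minimal sketch, named dump_var to avoid clashing with Perl’s built-in dump. Because the caller passes the reference explicitly, this works even for deeply nested lexicals:

use strict;
use warnings;
use Data::Dumper;

sub dump_var {
    my ($ref, $name) = @_;               # ref + name, 2 args in total
    print Data::Dumper->Dump([$ref], [$name]);
}

my @price = (9.99, 10.50);
dump_var(\@price, 'price');              # prints: $price = [ '9.99', '10.5' ];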

studying a complex batch app (WallSt) — suggestions

Challenge: too many steps. Each usually represented by a function if well-modularized.
! Challenge: You don’t know how many of the steps are usually skipped and deserve no scrutiny.
Challenge: too many business rules
Challenge: too much branching including return/break/continue
! Challenge: each run of the batch takes too long, so the run-edit-analyze cycle is too long.

– Tip: identify and put aside “quick” steps in order to focus on the important steps. Subs that take a short time are usually less complicated or involve less database interaction.
– Tip: real benchmarking (or reverse engineering in general) requires good test data.
– Tip: initially, perhaps you prefer just a single record in input stream.
– Tip: if possible, output all the sql statements. If possible, also “announce” the entry and exit of key functions, which provides context for the sql statements.
– Tip: identify non-essential yet slow steps to comment out. Non-essential = zero downstream impact
– Tip: at a key junction in an important function, when you print out a variable it’s immensely useful to see the call stack too (see the sketch after this list).
– Tip: rename variables/functions — one of the safest and most reversible changes (with perhaps one major side effect, ie cvs diff …)
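
For the call-stack tip, Perl’s core Carp module gives you this in one line; the function and data below are made up:

use strict;
use warnings;
use Carp qw(cluck);
use Data::Dumper;

sub settle_trade {                       # hypothetical key function
    my ($trade) = @_;
    # warn with the variable AND the full call stack that led here
    cluck 'settle_trade got: ' . Dumper($trade);
}

sub run_batch { settle_trade({ id => 42 }); }
run_batch();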