Archive for the ‘Product Development’ Category

False assumptions on the minimum viable product

In Architecture,Business,Marketing,Product Development,Uncategorized on September 16, 2010 by petrem66

If you use Geoffrey Moore’s book ‘Crossing the Chasm’ as a general guideline on how to tap into a profitable market niche in the long run, the Lean Startup concept must be strongly refined. True, the aim is still to keep waste to a minimum during the startup phase, but one has to admit that conducting market research with screenshots, mockups, or partially working prototypes is also a form of waste. Who is going to take you seriously and allocate precious time to talking with you about screenshots? Friends, colleagues, relatives, people who are not busy running real businesses. I’ve talked to people in my social network, and it is hard to convince them to even consider introducing me to really busy people. Do I have a case to push for that? No, not yet. I know, though, that the path to the ‘chasm’ is a different one.

I think that the absolute minimum needed to play the market-fit game is a functioning core product that is easy to customize at the front end (that is, the UI) to allow for testing market hypotheses as they come. Only after that is done can I afford to proceed with testing, learning, and realigning or extending, and, when possible, get paid for any service I provide to the customers my product manages to attract.

What is the functioning core product?

Generally speaking, the core product consists of independent functionality: the minimum set of building blocks absolutely needed by any ‘solution’ in the problem domain. A building block common to almost all startups is payment functionality. The building blocks must help lower the cost of putting together the minimum viable product, so that when testing a market hypothesis you can charge for the service should you find a real customer. Only by making revenue do you know for sure that your solution solves a real problem, and only then can you hope for solid market traction.

If, after a long struggle to find customers and problems to solve, you’ve got a customer but cannot charge for your service, that’s bad: a business is about selling goods and services, not volunteer work. Signup for membership is also part of the core functionality.

In my case, the document engine is also a core building block, since my problem domain with www.documentclick.com is all about documents.

What core product is not?

Although some aspects of it belong to the core functionality, the user interface in general must not be counted as part of the core product. It will be refined over and over for better SEO, customer appeal, usability, and so on. Adapters to third-party platforms (such as integrations with salesforce.com, zoho.com, or Google Apps) may come later, as you discover a market niche for them. SEO is not yet a concern.

Building a generic framework to assemble the building blocks is never a good idea. There are at least a few dozen excellent frameworks out there (including CMS/DMS platforms) that can be ‘customized’ at minimum cost to host your building blocks and the specific UI of a minimum viable product.

Apache FOP

In Open Source,Product Development,Technology,Uncategorized on May 12, 2010 by petrem66

I’ve been using this transformer on many projects so far. It is quite powerful in that it can build documents from a structured definition written according to the XSL-FO schema. The utility of such a package is obvious, since building dynamic documents is necessary in many B2B and B2C applications, and Apache FOP does the job well when processing speed and memory footprint are not an issue, but it falls short on performance.
The package operates as a four-step transformation of the input definition (the XSL-FO document). First, the package loads and configures its main objects, including the renderer, and loads the available fonts into internal objects.
Second, the package loads the input, constructing a hierarchy of FONode objects (with Root at the top) using a plain old SAX parser and a custom-made ContentHandler.
The third step kicks in when one of the page-sequence elements finishes building, at the endElement method call of the ContentHandler. This step transforms the FONode-based hierarchy into an internal format based on Block and its specialized subclasses. This structure is an abstraction of the layout of the document, a sort of medium-independent view.
Finally, during the fourth step, based on the chosen renderer (one has to pass in the MIME type of the expected output), the Block-based hierarchy is rendered to a final output that can be PDF, PNG, TIFF, JPG, HTML, or even RTF.
Based on my experiments with Apache FOP, I am confident that it spends roughly 40% of its processing time and CPU on the first step, 30% on the second, 20% on the third, and 10% on the last. What does that mean? If the transformer gets a 3-page document done in 2 seconds, one can be sure it has burned 1.4 seconds preparing itself and loading the XSL-FO, and the rest (0.6 seconds) doing the actual job. That’s quite a limitation, and I think it can be done better.
The first thing to do to improve its performance is to rewrite the code pertaining to steps 1 and 2. The challenge is that it’s not easy to replace the FONode-based hierarchy with something lighter, such as an XmlBeans-based one.
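For reference, this is roughly how the whole pipeline is driven from user code. Below is a minimal sketch of embedding FOP, using the classic no-argument FopFactory API of the FOP 0.9x/1.x era; the file names are placeholders. The comments map the calls onto the four steps described above.

import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;
import javax.xml.transform.Result;
import javax.xml.transform.Source;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.sax.SAXResult;
import javax.xml.transform.stream.StreamSource;
import org.apache.fop.apps.Fop;
import org.apache.fop.apps.FopFactory;
import org.apache.fop.apps.MimeConstants;

public class FoToPdf {
    public static void main(String[] args) throws Exception {
        // step 1 happens here: the factory configures its main objects and loads fonts
        FopFactory fopFactory = FopFactory.newInstance();
        OutputStream out = new BufferedOutputStream(new FileOutputStream("output.pdf"));
        try {
            // the MIME type selects the renderer used in step 4
            Fop fop = fopFactory.newFop(MimeConstants.MIME_PDF, out);
            // an identity transform feeds the XSL-FO input to FOP's SAX ContentHandler (step 2)
            Transformer transformer = TransformerFactory.newInstance().newTransformer();
            Source src = new StreamSource(new File("input.fo"));
            Result res = new SAXResult(fop.getDefaultHandler());
            // steps 3 and 4 run as each page sequence finishes parsing
            transformer.transform(src, res);
        } finally {
            out.close();
        }
    }
}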

On unit testing

In Product Development,Uncategorized on April 28, 2010 by petrem66

I think it takes a whole lot of time to do proper unit testing; it is like writing twice as much code as you otherwise would. The benefit, though, is that one can verify a lot faster that modules behave as expected. There are some very good points about unit testing at writing-great-unit-tests-best-and-worst-practises
The big question remains how granular the testing should be. There are two main ‘camps’ of unit testing supporters that I know of.
People in the first camp believe that the best results are obtained when unit testing at the component level, such as the class level. One should not leave out any visible functionality, and the unit tests should cover both negative and positive use cases. My issue with this approach is that, as the code grows and the product evolves, it is very hard to maintain the integrity of the unit tests. The second camp aligns unit testing with the structure of the application; that is, unit tests follow the lines of delimitation between modules, each seen as a unit. Think of design patterns like the session facade: people in the second camp won’t bother testing such a class, since it merely passes requests on to other classes. Further, when you use dependency injection with the Spring Framework, module-level unit testing becomes the only logical choice.
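To illustrate the module-level approach, here is a minimal JUnit 4 sketch; the DocumentService, DocumentRepository, and InMemoryRepository names are hypothetical. The test exercises the module through its public entry points and wires in a test double for the dependency that Spring would normally inject, rather than testing every internal class separately.

import static org.junit.Assert.*;
import java.util.HashMap;
import java.util.Map;
import org.junit.Test;

public class DocumentServiceTest {

    // the boundary of the module: the service delegates persistence to a repository
    interface DocumentRepository {
        void save(String name, byte[] content);
        byte[] load(String name);
    }

    // the module under test; in production the repository is injected by Spring
    static class DocumentService {
        private final DocumentRepository repository;
        DocumentService(DocumentRepository repository) { this.repository = repository; }
        void store(String name, byte[] content) {
            if (content == null || content.length == 0)
                throw new IllegalArgumentException("empty document");
            repository.save(name, content);
        }
        byte[] fetch(String name) { return repository.load(name); }
    }

    // a trivial in-memory test double standing in for the real persistence layer
    static class InMemoryRepository implements DocumentRepository {
        private final Map<String, byte[]> store = new HashMap<String, byte[]>();
        public void save(String name, byte[] content) { store.put(name, content); }
        public byte[] load(String name) { return store.get(name); }
    }

    @Test
    public void storedDocumentCanBeFetched() {
        DocumentService service = new DocumentService(new InMemoryRepository());
        service.store("contract", "hello".getBytes());
        assertArrayEquals("hello".getBytes(), service.fetch("contract"));
    }

    @Test(expected = IllegalArgumentException.class)
    public void emptyDocumentIsRejected() {
        new DocumentService(new InMemoryRepository()).store("contract", new byte[0]);
    }
}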

On securing the persistent data on ‘cloud’

In Architecture,Product Development on April 15, 2010 by petrem66

The application’s foundation has to be built so that it will stand up to future heavy requirements like security and privacy compliance. One of the most important aspects that needs to be thought through carefully is how information is accessed outside the running code.
In any cloud-based deployment, one can use a persistent file system to store blocks of data, through capabilities such as Cloud Files on Rackspace or S3 on Amazon Web Services. This is especially true when such data must be shared among the many server instances that form your application. Three aspects are worth mentioning with regard to such blocks of data:
– encryption
– compaction
– integrity check

Encryption

A very detailed discussion of this topic can be found in Core Security Patterns. Suffice it to say that a Java developer can choose from various encryption algorithms readily at hand. Since the block of data is not shared outside the application, one should go for a common encryption key. If that key is hard-coded somewhere in a reusable piece of code, it is safe to assume that it is highly unlikely a malicious third party would be able to get it from there. Cracking the key of a strong encryption algorithm like AES with 192-bit keys is quite a challenge, but even so one can imagine a strategy of changing such a key on a regular basis (say, weekly or monthly), accompanied by a data migration task (re-encryption, that is).
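As a minimal sketch of that approach with the standard javax.crypto API, assuming a 24-byte key (AES-192) supplied by the application’s configuration. CBC mode with a random IV prepended to the output is my choice here, not something mandated above; older JREs may need the unlimited-strength JCE policy files for 192-bit keys.

import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class BlockCrypto {
    private static final SecureRandom RANDOM = new SecureRandom();

    // key must be 24 bytes long for AES-192
    public static byte[] encrypt(byte[] key, byte[] data) throws Exception {
        byte[] iv = new byte[16];
        RANDOM.nextBytes(iv); // fresh IV per block of data
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        byte[] encrypted = cipher.doFinal(data);
        // prepend the IV so the consuming side can initialize its cipher
        byte[] out = new byte[iv.length + encrypted.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(encrypted, 0, out, iv.length, encrypted.length);
        return out;
    }

    public static byte[] decrypt(byte[] key, byte[] data) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        // the first 16 bytes of the stored block are the IV
        cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"),
                new IvParameterSpec(data, 0, 16));
        return cipher.doFinal(data, 16, data.length - 16);
    }
}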

Compaction

In order to save on storage space and network bandwidth, one should consider archiving the blocks of data. The Java runtime comes with a neat wrapper around GZIP compression in java.util.zip. The snippets of code below do the job of archiving/expanding your block of data:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// expands a GZIP-compressed block of data back into its original bytes
public byte[] expand(byte[] buffer) throws Exception {
    GZIPInputStream zin = new GZIPInputStream(new ByteArrayInputStream(buffer));
    try {
        ByteArrayOutputStream os = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = zin.read(chunk)) > 0)
            os.write(chunk, 0, n);
        return os.toByteArray();
    }
    finally {
        try { zin.close(); } catch (Exception e) {}
    }
}

// compresses a block of data with GZIP before it is stored or shipped
public byte[] archive(byte[] buffer) throws Exception {
    ByteArrayOutputStream os = new ByteArrayOutputStream();
    GZIPOutputStream zout = new GZIPOutputStream(os);
    zout.write(buffer);
    zout.finish(); // flushes the deflater and writes the GZIP trailer
    zout.close();
    return os.toByteArray();
}

Integrity check

From a consuming application’s perspective, it is important to be assured that the block of data has not been tampered with. Usually, the producing application accompanies it with an MD5-based checksum, which it can pass along with the block-of-data descriptor to the consumer. The md5sum can be obtained through the Java API using java.security.MessageDigest (see the code snippet below).

import java.security.MessageDigest;

public boolean match(String md5sum, byte[] buffer) throws Exception {
    // compare the freshly computed digest against the one shipped with the block
    return getMD5sum(buffer).equalsIgnoreCase(md5sum);
}

public String getMD5sum(byte[] buffer) throws Exception {
    byte[] sum = MessageDigest.getInstance("MD5").digest(buffer);
    StringBuilder sbuf = new StringBuilder();
    for (int i = 0; i < sum.length; i++) {
        int c = sum[i] & 0xff;                     // treat the byte as unsigned
        sbuf.append(Integer.toHexString(c >>> 4)); // high nibble
        sbuf.append(Integer.toHexString(c & 15));  // low nibble
    }
    return sbuf.toString();
}

The string that results from calling getMD5sum can be stored along with the block name and passed to the consuming application as such.
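Putting the three parts together, the producing and consuming sides might handle a block roughly as sketched below. BlockCrypto is the sketch from the encryption section, and BlockDescriptor is a hypothetical holder for the block name, checksum, and content; the checksum is computed over the encrypted bytes so the consumer can verify integrity before decrypting.

// producing side: compress, encrypt, and fingerprint a block before upload
public BlockDescriptor prepareBlock(String name, byte[] raw, byte[] key) throws Exception {
    byte[] packed = BlockCrypto.encrypt(key, archive(raw)); // compaction, then encryption
    return new BlockDescriptor(name, getMD5sum(packed), packed);
}

// consuming side: verify integrity first, then decrypt and expand
public byte[] readBlock(BlockDescriptor d, byte[] key) throws Exception {
    if (!match(d.md5sum, d.content))
        throw new IllegalStateException("block " + d.name + " failed its integrity check");
    return expand(BlockCrypto.decrypt(key, d.content));
}

// a minimal descriptor carrying the block name and its checksum alongside the data
public static class BlockDescriptor {
    final String name, md5sum;
    final byte[] content;
    BlockDescriptor(String name, String md5sum, byte[] content) {
        this.name = name; this.md5sum = md5sum; this.content = content;
    }
}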