One thing that seems to come up with some frequency in SOA circles is creating a checklist to help the development team, the architecture group, the SOA Governance board, and the business customer feel confident about rolling out a service: something that acts as a guide so developers know they're building something appropriate and useful, that encourages good SOA design principles, and that makes sure the business actually realizes the benefits of SOA (instead of just moving the problem around).
So I thought I'd put this checklist together and hopefully save you some effort. This is probably something I should have included in Java SOA Cookbook, but here it is now.
This checklist for Service Implementation includes 90 questions for ensuring that a proposed candidate service will make a good addition to your catalog, and has a good chance of being successful after it's gone live.
Here’s how this is intended to be used in the context of SOA Governance:
- Introduce this Review Checklist to the Application Architect or development team who will design and realize the service—before they start working on it. If they know this test will be coming at the end, they will account for these things. Just as with Test-Driven Development where you write the code that makes the test pass, this is Test-Driven Service Design—create the service that will pass the test below. No Programmer Left Behind!
- While the service is in development, the team should refer back to this checklist to make sure nothing is being forgotten. Of course, not everything will apply to every service.
- When the service is nearing the end of the development cycle, the Application Architect or whoever is the lead developer of the service should present her service before the SOA Governance Board, so that they can be responsible for introducing it to production deployment and the service catalog. If too many items have been left unconsidered by the development team, don't let the service be promoted.
- What business assumptions have been made? What risks do they pose?
- What system assumptions have been made? What risks do they pose?
- What specifically does this service do to address the following:
- Decrease costs
- Increase revenue
- Promote agility
- Promote productivity
- What is the business governance domain for this service?
- What is the category of this service?
- Stateful Business Process (Employee Onboarding, Return Merchandise)
- Business Entity (nouns such as Employee, Customer)
- Business Functions (verbs for atomic actions in a process). May also be Event Handlers
- Utility (perform an application-agnostic function such as Email)
- Security Service (handle identity, authorization, privacy)
- How has the design iterated or evolved? Did you start by considering the consumer view?
- Describe how the design used a “middle out” approach.
- Where are the tightest couplings with other services, other systems, etc?
- How do you anticipate this service being reused? In what systems? By what kinds of consumers? How can modifications be minimized?
- What patterns from the Service Design Patterns catalog have been employed? (If you don't have your own internally approved SOA Patterns catalog, start out using Thomas Erl's excellent http://soapatterns.org).
- Have you followed relevant organizational implementation standards (Java coding conventions, etc)?
- How have you accounted for internationalization? How will your service support localization (eg, return different data based on geographic location, formatting concerns for currency, language, and other items)?
- What transports does your service support (SOAP, JMS, HTTP, etc)? Why were those selected?
- Does the service make use of or allow for user preferences (eg, number of results returned)?
- What are the basic Message Exchange Patterns used for this service?
- How does the design support an event-driven approach?
- Does the service support purely stateless connections (unless it is a business process service)?
- Do service operation definitions support typical variations in the domain?
- Have you avoided any messages, operations, or logic that are consumer-specific?
- Are all operations capable of being executed independently, without necessarily relying on any previous invocation of another operation?
- Are all data access operations idempotent?
- Does the service offer a variety of operations for retrieving minimal, most common, and full data sets?
- Does the service use only standard logging facilities and a log rotation strategy?
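The idempotency item above is worth making concrete: a data access operation is idempotent when a retried call leaves the system in the same state as a single call, which matters because consumers routinely retry after timeouts. A minimal sketch, using a hypothetical in-memory CustomerStore (the class and method names here are illustrative, not from any real service):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical store illustrating idempotent vs. non-idempotent operations.
class CustomerStore {
    private final Map<String, String> names = new HashMap<>();

    // Idempotent: repeating the call leaves the store in the same state,
    // so a consumer can safely retry after a timeout.
    void putName(String id, String name) {
        names.put(id, name);
    }

    // Not idempotent: each retry changes state, so a timed-out-and-retried
    // message would corrupt the data.
    void appendSuffix(String id, String suffix) {
        names.merge(id, suffix, (existing, added) -> existing + added);
    }

    String getName(String id) {
        return names.get(id);
    }
}
```

A consumer that calls putName twice ends up with the same data as one that calls it once; a consumer that retries appendSuffix does not. Designing operations in the first style is what makes retry-on-timeout a safe default.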
- Name all external systems called by this service.
- Does this service wrap an existing legacy system or database? Could that system be entirely replaced by a newer or different implementation without affecting consumers?
- How does the service capture inputs and outputs as business documents? How does your level of abstraction avoid RPC?
- Does this service avoid directly invoking another service, instead pushing such invocations up to an orchestration?
- Have any specific business processes been identified that can use this service in automation?
- What business rules have been identified that can be extracted to a business rules management system or external rules engine?
- Does the service reference any business rules that may feature thresholds or other items that could be configured by a business user, or are they baked into code?
- What KPIs have been identified for the service?
- How does the design fulfill the functional requirements of the service?
- How have the boundary cases been considered?
- Describe how this service accesses data, what data it accesses, and where.
- Are transactions required? How does the design handle transactions? Has compensation been considered as an alternative?
- Describe how this service fully encapsulates its data. If it cannot at this point, what is the transition plan for doing so?
- Describe how this service uses the Canonical Data Model.
- How does the service perform validation on incoming data? How does the service respond to invalid inbound data?
- How does the service account for data quality?
- Have you externalized strings?
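The externalized-strings and configurable-thresholds items above can both be satisfied the same way: read values from a properties source instead of baking them into code, so a business user or operator can change them without a redeploy. A minimal sketch using java.util.Properties; the keys and class name are hypothetical:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

// Minimal sketch: thresholds and user-facing strings read from a properties
// source rather than hard-coded. In production the source would be a file,
// registry, or configuration service rather than a string.
class ServiceConfig {
    private final Properties props = new Properties();

    ServiceConfig(String source) {
        try {
            props.load(new StringReader(source));
        } catch (IOException e) {
            throw new RuntimeException(e); // StringReader will not actually fail
        }
    }

    // Business-configurable threshold with a safe fallback.
    int intValue(String key, int fallback) {
        String v = props.getProperty(key);
        return v == null ? fallback : Integer.parseInt(v.trim());
    }

    // Externalized user-facing string.
    String message(String key, String fallback) {
        return props.getProperty(key, fallback);
    }
}
```

The same pattern extends to localization: load a different properties source per locale and the service code never changes.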
- Have relevant current and potential future consumers been consulted on the service contract?
- Will this service use virtualization? Why or why not?
- How does the service support the principles of Service Orientation?
- Loose coupling
- Standard Contract
- Does the service contract need to vary for external business consumers as opposed to internal application consumers? How so?
- Does the service use only standard message return codes and user-friendly descriptions?
- What checked exceptions (faults) does the service offer? Under what circumstances can they be generated?
- What runtime exceptions are likely to be generated from the service? What result do you expect in consumers receiving runtime exceptions? How can this be mitigated?
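To illustrate the checked-fault item above: declaring the fault in the operation's contract forces consumers to handle the failure case, and pairing a machine-readable code with a user-friendly description satisfies the standard-return-codes item as well. A sketch with hypothetical names (the fault, service, and code "INV-001" are all invented for illustration):

```java
// Hypothetical checked fault carrying a standard error code plus a
// user-friendly description.
class InsufficientInventoryFault extends Exception {
    private final String errorCode;

    InsufficientInventoryFault(String errorCode, String message) {
        super(message);
        this.errorCode = errorCode;
    }

    String getErrorCode() { return errorCode; }
}

class OrderService {
    private final int onHand;

    OrderService(int onHand) { this.onHand = onHand; }

    // The fault appears in the contract, so consumers must account for it.
    int reserve(int quantity) throws InsufficientInventoryFault {
        if (quantity > onHand) {
            throw new InsufficientInventoryFault("INV-001",
                "Only " + onHand + " units are available.");
        }
        return onHand - quantity;
    }
}
```

Runtime exceptions, by contrast, typically surface to the consumer as an opaque fault, which is why the checklist asks you to enumerate and mitigate them rather than let them leak.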
- What is the measured latency of service response in testing?
- Have SLAs been defined for this service? What mechanisms are in place to detect SLA violations? What mechanisms are in place to report SLA violations?
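One simple way to answer the latency and SLA-violation questions above is to wrap each invocation in client-side instrumentation that times the call and counts responses exceeding the agreed threshold. A minimal sketch, with hypothetical names; a real deployment would feed these counters into your monitoring system rather than hold them in memory:

```java
import java.util.function.Supplier;

// Minimal sketch of SLA instrumentation: time each invocation and count
// responses that exceed the agreed threshold.
class SlaMonitor {
    private final long thresholdMillis;
    private long invocations = 0;
    private long violations = 0;

    SlaMonitor(long thresholdMillis) {
        this.thresholdMillis = thresholdMillis;
    }

    // Wraps any call, recording elapsed time even if the call throws.
    <T> T timed(Supplier<T> call) {
        long start = System.nanoTime();
        try {
            return call.get();
        } finally {
            long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
            invocations++;
            if (elapsedMillis > thresholdMillis) violations++;
        }
    }

    long invocations() { return invocations; }
    long violations() { return violations; }
}
```

Recording this during load testing also gives you the measured-latency figure the checklist asks for, rather than a guess.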
- What steps in an orchestration can you design to be executed in parallel and joined later?
- How does the design encourage asynchronous invocation?
- Are the operations designed at various levels of appropriate granularity so that they are not prone to network chattiness and do not return data clients are not likely to need?
- How does the design delineate between operations that must be performed quickly and operations that are long-running?
- Can the service be scaled by adding more nodes running the service? What might prevent this?
- What is your caching strategy behind the service implementation? Can known consumers easily cache data in front of the service? How will this be managed (eviction policy, invalidation, etc)?
- If your messages use binary data, do they employ MTOM?
- Does your design allow for clients to select variations on an operation based on their context? For example, do you offer both doXandWait(m) : Response and doXLater(m) : Void operations?
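The wait/later pairing above can be sketched with a blocking operation and a deferred variant built on CompletableFuture, letting the consumer choose based on its context. The service and operation names here are hypothetical:

```java
import java.util.concurrent.CompletableFuture;

// Sketch of offering the same operation in blocking and deferred forms.
class PricingService {
    // doXandWait-style: the caller blocks for the response.
    String quoteAndWait(String sku) {
        return "quote:" + sku;
    }

    // doXLater-style: the caller receives a future and keeps working,
    // collecting the result (or an error) when it completes.
    CompletableFuture<String> quoteLater(String sku) {
        return CompletableFuture.supplyAsync(() -> quoteAndWait(sku));
    }
}
```

An interactive client might call the blocking form; a batch orchestration would fan out many quoteLater calls in parallel and join them, which also speaks to the parallel-steps item above.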
- Does the service require authentication? Authorization? How are these implemented?
- What other regulatory constraints (PCI, Sarbanes-Oxley, etc) might affect this service contract or deployment? How have those been directly accounted for in the design?
- What are any additional security requirements for this service? How are they fulfilled?
- Does the service allow for auditing? How is that implemented?
- Are logs free from PCI or PII information?
- How many unit tests are available for this service?
- Are all unit tests independently executable (ie, not dependent on the successful run of any prior test)?
- Were test cases created for every user function? Did the tests use a variety of data inputs (valid, invalid, null, many different combinations of length and character)?
- Were test cases created for all exception conditions?
- Were test cases created around a generated client (in Java or .NET)?
- Are the unit tests in version control, and versioned in clear correspondence with the service so that the environment can be entirely reproduced?
- What is the test coverage percentage for this service?
- If this is a new version of an existing service, have you tested directly for backwards compatibility issues?
- If a consumer is available, what functional tests were written against it?
- How was the service load tested? What metrics were recorded?
- If the service uses asynchronous or fire-and-forget operations, were these tested by subscription?
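The independently-executable-tests item above comes down to one habit: every test builds its own fixture instead of relying on state left behind by an earlier test, so tests pass in any order and any subset. A sketch with a hypothetical cart fixture (plain Java rather than a test framework, to keep it self-contained):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of independently executable tests: each test constructs its own
// fixture, so no test depends on a prior test's run.
class CartTests {
    // Hypothetical fixture factory: every test starts from the same known state.
    static List<String> newCartWithOneItem() {
        List<String> cart = new ArrayList<>();
        cart.add("sku-1");
        return cart;
    }

    static boolean testAddItem() {
        List<String> cart = newCartWithOneItem();
        cart.add("sku-2");
        return cart.size() == 2;
    }

    static boolean testRemoveItem() {
        // Fresh fixture here, not the cart mutated by testAddItem.
        List<String> cart = newCartWithOneItem();
        cart.remove("sku-1");
        return cart.isEmpty();
    }
}
```

Because each test is self-contained, the suite can be run in any order, repeated, or executed in parallel, which is also what makes coverage numbers trustworthy.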
- Are the service and attendant schemas appropriately versioned?
- What can be retired or sunsetted after this service is deployed?
- Have you externalized configurable data?
Availability & Support
- Will the service be highly available? What are the availability requirements? How will these be met? What is the business impact if the service is down for 1 minute? 5 minutes? 30 minutes? 1 hour? 4 hours?
- How will availability be measured?
- How will the production support team receive messages or alerts regarding the current state or health of the service?
- How will runtime issues with the service be addressed organizationally? Has an on-call schedule been established?
- What visibility does the service directly offer (eg, in the form of JMX) to different support teams?
- Does the service require planned downtime for maintenance? How much time, and how often? What is expected to be performed during this down time?
- How have you involved the infrastructure team in the creation and design of this service?
- What is the plan for future maintenance of the service after it is successfully deployed?
- What problems is service deployment likely to pose?
- Will the service be load balanced?
- Have you captured the design in the Service Template?
- Has the service been captured in the service catalog?
- Have you followed relevant standards for code-level documentation?
- Have all test execution results been recorded (eg, in the Maven site)?
The blog ArtOfSoftwareReuse.com has a very good checklist too, and I've incorporated some of the excellent work done by Vijay there at http://bit.ly/3UJaNk. That checklist has fewer than 50 items; mine is approximately twice the length. Of course, it's expected that you'll use only what makes sense for your organization. Good luck!