What goes into requirements, at what level of detail, and who is accountable for them?
Financial rebalancing example: High-level ask: given an investment account, generate a set of trades to bring it in line with a model, and allow execution of those trades after review.
Basic requirements, including UX (graphics are described): allow the user to select a model and an account, provide a set of trades for review, let them select which ones they want to execute, and after submission return the results.
The business also describes a list of twenty-something validations, each with specific error copy. Some of these apply only to the API that generates the trades, others apply only to trade submission (mostly security related), and the rest apply to both. They have a defined priority so that, if multiple validations fail, we know which single error to display to the user.
This brings me to the issue: we have several implementation choices, and one question is who should be deciding between them.

1. We could implement each check as an isolated function that takes the model and account IDs, gathers its own data, and validates. However, this means each function repeatedly hits the database, which potentially impacts performance.
2. We could implement two distinct APIs that do not share code. This means there is code duplication, and when a check needs to be changed, we need to change it in two places.
3. We could implement a mix: each API and each check takes varying amounts of data, probably gathered slightly differently, but no data is gathered twice and no check is duplicated. IMO this sounds like the best approach, but it's also the most time-consuming to define up front (see the sketch after this list).
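For concreteness, here is a minimal TypeScript sketch of what option 3 could look like, assuming data is gathered once into a shared context and each check is a pure function over it. Every name here (`RebalanceContext`, `ValidationCheck`, the rule IDs) is hypothetical, not from our actual system:

```typescript
// Hypothetical sketch of option 3: data is gathered once per request into a
// shared context, and each check is a pure function over that context,
// tagged with the phase(s) it applies to and a business-defined priority.

type Phase = "generate" | "submit";

interface RebalanceContext {
  // Gathered once per request; the exact fields are placeholders.
  account: { id: string; restricted: boolean };
  model: { id: string; targetWeights: Map<string, number> };
  positions: { symbol: string; quantity: number }[];
}

interface ValidationCheck {
  id: string;                                     // business rule ID, e.g. "VAL-01"
  priority: number;                               // lower number wins when several fail
  phases: Phase[];                                // which API(s) the check applies to
  run: (ctx: RebalanceContext) => string | null;  // error copy, or null if it passes
}

const checks: ValidationCheck[] = [
  {
    id: "VAL-01",
    priority: 1,
    phases: ["generate", "submit"],
    run: (ctx) =>
      ctx.account.restricted ? "This account is restricted from trading." : null,
  },
  // ...the other ~20 checks registered here, each tagged with its phases...
];

// Both APIs share the same runner; only the phase argument differs.
function firstError(ctx: RebalanceContext, phase: Phase): string | null {
  const failures = checks
    .filter((c) => c.phases.includes(phase))
    .map((c) => ({ priority: c.priority, error: c.run(ctx) }))
    .filter((r) => r.error !== null)
    .sort((a, b) => a.priority - b.priority);
  return failures.length > 0 ? failures[0].error : null;
}
```

The up-front cost is agreeing on what goes into `RebalanceContext` so every check has what it needs without a second trip to the database; the payoff is that no check is duplicated between the two APIs.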
The next question is accountability for making that choice, plus the future cost of rewriting it and of impact analysis. If the decision is made in the code and is only documented within the code's documentation (Javadoc format), impact analysis can only be done by technical staff. If non-technical staff should be doing impact analysis, then for a given change a set of test cases needs to be created. Assuming automated testing doesn't have 100% coverage and manual regression testing is done, that means the impact analysis must be done prior to development if test cases are being written prior to development.
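One middle ground might be to make the in-code documentation carry the business rule IDs and cross-references, so impact analysis can at least start from generated docs rather than from source. A hypothetical example (the rule ID, copy, and cross-reference are made up, and it reuses the `RebalanceContext` type from the sketch above):

```typescript
/**
 * VAL-07 (hypothetical rule ID): insufficient settled cash.
 * Applies to: trade generation and trade submission.
 * Error copy owner: business.
 * Shared data: reads the same positions data as VAL-12, so changing
 * that query should trigger impact analysis for both rules.
 */
function checkSettledCash(ctx: RebalanceContext): string | null {
  const cash = ctx.positions.find((p) => p.symbol === "CASH");
  return cash !== undefined && cash.quantity >= 0
    ? null
    : "Insufficient settled cash to execute these trades.";
}
```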
What level of detail needs to go into requirements: are these UX requirements or something else? For example, let's say we have a Salesforce integration. On our sidebar we have a button that, when pressed, opens Salesforce using SSO in a new window if the user is already linked, or launches the Salesforce OAuth authentication flow if not.
The reqs could be as simple as: show "Salesforce" in the sidebar if the user is entitled; if the user is not linked, launch the Salesforce authentication flow in a new tab, otherwise launch Salesforce using SSO.
Or they could be as complicated (a simplified version, still skipping some UX) as: Salesforce in sidebar:
– Displays only if the user is entitled to View Salesforce
– When clicked, gathers the Salesforce URL using the Salesforce SSO API
– On success, opens that URL in a new tab
– On failure, gathers the Salesforce OAuth URL using the Salesforce Login API, then opens a popup window with that URL, 500px by 540px, centered on the current window, which when resolved will redirect the user to the Salesforce SSO Callback API (see the sketch below)
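To make the gap between the two versions concrete, here is roughly the handler a developer might write from the detailed version. This is only a TypeScript sketch: the endpoint paths, response shape, and centering math are my assumptions, not part of any actual spec.

```typescript
// Hypothetical sidebar click handler derived from the detailed requirements.
// Endpoint paths and response shapes are assumptions for illustration.
async function onSalesforceClick(): Promise<void> {
  try {
    // "When clicked, gathers the Salesforce URL using the Salesforce SSO API"
    const res = await fetch("/api/salesforce/sso-url");
    if (!res.ok) throw new Error(`SSO lookup failed: ${res.status}`);
    const { url } = await res.json();
    // "On success, opens that URL in a new tab"
    window.open(url, "_blank");
  } catch {
    // "On failure, gathers the Salesforce OAuth URL using the Salesforce Login API"
    const res = await fetch("/api/salesforce/oauth-url");
    const { url } = await res.json();
    // 500px by 540px popup centered on the current window; the OAuth flow
    // resolves by redirecting to the Salesforce SSO Callback API server-side.
    const w = 500, h = 540;
    const left = window.screenX + (window.outerWidth - w) / 2;
    const top = window.screenY + (window.outerHeight - h) / 2;
    window.open(url, "salesforce-oauth",
      `width=${w},height=${h},left=${left},top=${top}`);
  }
}
```

Even the detailed version leaves things out (what happens if the Login API also fails, for instance), so someone still has to decide where that remaining detail lives.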
Sorry, I think this has dissolved into more of a rant without a purpose, but at my current company we're struggling to figure this out.
Without the more detailed requirements, when something is changed, impact analysis is left up to the developers, usually within their own development process. Where the problem mostly lies, I think, is the impact of changing A and breaking the seemingly unrelated feature B, where the only connection is that they both pull data using a shared query (a contrived sketch follows). Additional testing would need to be added to retest B before A could be released.
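A contrived TypeScript illustration of that coupling; the table, the `db` object, and both features are made up:

```typescript
// Two features share one query helper; `db` is a placeholder for
// whatever data layer is actually in use.
declare const db: { query<T>(sql: string, params: unknown[]): Promise<T[]> };

interface PositionRow { symbol: string; quantity: number; price: number }

// Shared query used by both features.
function fetchPositions(accountId: string): Promise<PositionRow[]> {
  return db.query<PositionRow>(
    "SELECT symbol, quantity, price FROM positions WHERE account_id = ?",
    [accountId]
  );
}

// Feature A (rebalancing) stops needing `price`, so its developer drops
// the column from the SELECT. Feature B (a holdings report) still reads
// `price` and breaks, even though no B code was touched. That is why B
// must be retested before A's change can be released.
```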
It’s a small company: fewer than 11 people are involved in the entire process, and outside of that group anyone in the process would really be considered a “client” as far as the development process is concerned.