
Six Mistakes to Avoid when You Start a New Job

I just wrapped up a year-long research project investigating what happens when leaders take on new roles or responsibilities in their company.

For the last few years ‘onboarding’ – the process of acquiring, accommodating, assimilating and accelerating new team members, whether they come from outside or inside the company – has been a hot topic, but the conversation has focused almost exclusively on employees who join from the outside. Yet there has been far more internal movement than external hiring in the past year. Does already knowing the company make a transition easy? We didn’t think so, and neither do the statistics.

Surveys by DDI (2006) and The Institute for Executive Development (2008) report that 25% of people who move into a new job will fail. You don’t want to be one of those statistics. Paying attention to six common mistakes will help you make a smooth, successful transition.

Mistake #1: Assume you know what is expected of you

One of the biggest complaints we heard from new leaders was a lack of clarity around the role and their boss’s expectations. A key reason for this is that there simply isn’t enough candid conversation, before or after the move. Would you take a job at a different company without thoroughly investigating it and spending time with your new boss discussing their expectations? Put the assumptions aside and start asking questions – even if you think they are dumb. How does your boss see the role? What do they expect you to accomplish? How will that be measured? What do other people expect from you?

Mistake #2: Assume you have relationships with the right people

When people join a new company they invest a lot of energy building relationships with stakeholders. They know that these are critical to getting things done. When you move inside your company, you might assume you already know everyone. The reality is relationships are a lot more complicated than that – you take your history with you. If you have been promoted, people who used to be your superiors are now your peers. People who used to be peers are now subordinates. Not everyone is going to be happy with this arrangement, and not everyone will make it easy. A good strategy is to sit down and map out your stakeholders and realistically assess the history and strength of your relationship. You need to re-contract all of your pre-existing relationships, and you may need to build some bridges.

Mistake #3: Assume the culture is the same

Every function, every team, every level in a company has its own culture. Your job is to understand how it is different from what you are used to, and what you need to do to adjust to it. For people who are promoted, operating at a new level of team dynamics and politics often comes as a shock. The sooner you figure out nuances around group culture and norms, the faster you will be able to fit in.

Mistake #4: Forget to earn the credibility of others

When people come from the outside they spend their first 90 days working on an early win. They know it is critical to demonstrate why they were hired. What internal transfers sometimes forget is that their colleagues are waiting for them to prove themselves, too. People want to know why you got that promotion or were assigned to that high-profile project. It is not enough to rely on your history. You need to prove yourself.

Mistake #5: Take too long to figure out what you don’t know

When you’ve just been picked for a new job it can be tough to admit what you don’t know. After all, you’ve been around. People are expecting you to know, aren’t they? The likelihood that you are going to walk in and not know everything is pretty high. Figure out what that is sooner rather than later and don’t be afraid to ask the dumb questions.

Mistake #6: Ignore your development gaps

Every new job demands new things from you, things you may not have done before. You may need to delegate more, think more strategically, influence more effectively. Leaders often assume they are ready for a new job. We found that in the beginning, 75% believe they are adequately prepared and have the capabilities to be successful. By Month 10, this drops to 40%. It is more popular these days to talk about strengths, and you need to leverage them. But don’t ignore your development gaps. Address them before they become derailers.



Dr. Rebecca Schalm, who has a Ph.D. in Industrial/Organizational Psychology, is a Human Resources columnist with Troy Media Corporation and a practice leader with RHR International Company, a firm that offers psychology-related services to organizations worldwide.

Bad Ass BA; Peer Review. Part 2.

Co-authored by Rebecca Burgess

Rubber Stamp or an Objective Quality Check?

This article is part two of a four-part series on peer review of requirements documents.

Just to review, here are the four go/no-go checks:

  1. Business Case Synchronicity Check
  2. Requirements Document Sanity Check
  3. Requirements Statements Content Check
  4. Requirements Document Housekeeping Check

And here are the typical excuses not to do a peer review, and why those excuses are bogus.

  • It’s scary! They won’t say this out loud, but it’s true. You must separate your self-esteem from the evaluation of your work by learning to accept criticism as constructive and valued, rather than destructive and to be avoided at all costs.
  • It takes time. Yes, but the payback is substantial, both in minimizing the impact of errors, and in maturing the BAs on the team.
  • We are already behind schedule. Would you rather let the customer catch the errors? Or worse yet, miss the errors?
  • I don’t know anything about that project. So you’ll limit your review to the more general, BA-level comments; that’s still valuable.
  • It’s hard to do. True. The more you practice, the less difficult it will be the next time.

In the previous article we looked at the first check, Business Case Synchronicity. Let’s assume that your Business Requirements Document (BRD) syncs up with the business case: the benefits of doing the project have been identified, there is an expectation of Return on Investment (ROI), and there is an expectation that the project will report on progress towards achieving the ROI claim. Moreover, the Customer has identified the means for validating project success. Now it is time to perform the Sanity Check.

Requirements Document Sanity Check

This check verifies that the document is worth continuing to read at the business context level. If stakeholders could read this document and come away with different perceptions about what the project is about, the result would be an insane expenditure of time and money that can’t possibly have a good outcome.

Check the box next to an item if you can answer ‘yes’ for the document you are reviewing. It sometimes helps to approach this review the way you approach testing: by looking for ways to willfully misunderstand the document.


□ Does the Scope section identify what is in and what is out of scope?
□ Do the Scope statements describe what the project’s customer expects will get done – without specifying who is doing the work, or how the work will get done?
□ Can you understand the scope of the project and does it make sense?
□ Do you believe that the stakeholders share the same understanding of the project’s scope?
□ Is there a Business Context Diagram that provides a high-level overview of the functionality in terms of the system, the business entities that will interact with the system and information that is exchanged between the entities and the system?
□ In the Business Context Diagram, are the Business Entities defined in a concise (1-3 sentences) manner that makes sense to the customer? Are the business Interactions equally concisely defined?
□ Is there a shared understanding of “what have we got now”? Does the BRD need an “as-is” process map?
□ Is there a process flow diagram that helps the reader understand the impact of the new (or changed) functionality that will come from this project?
□ If this project focuses primarily on moving data around, is there a high-level data model? This is especially important when the application being developed will be pulling data from multiple sources outside of the application.
□ Have the business rules been articulated? Business rules are essentially constraints that the business imposes on itself. Typically one or two business rules govern a set of requirements.
□ Do the requirements categories make sense? If the requirements are jumbled together like unmatched socks in a sock drawer, the readers of your BRD will be as unhappy as the person who can’t find a pair of matching socks on a Monday morning.
□ Are the business level requirements stated in terms that reflect the needs of the business (as opposed to identifying a solution)?
□ If the document has a mix of business (high-level) and functional/non-functional (low-level) requirements, are the business-level requirements clearly identified as higher level? You might want to put the business requirements in a section separate from the functional and non-functional requirements, or they might be at a higher numbering level, e.g., requirement 2.0 is business-level, and requirements 2.1 through 2.6 are functional requirements that provide detail for requirement 2.0.
□ If there is existing functionality that must not be changed in the course of executing the project, does the BRD identify what that functionality is?
□ Is there a requirement for baseline values to be recorded for the existing functionality so that a comparison can be made prior to deployment of new functionality to verify that no unwanted changes will occur?
□ If the Customer has an expectation for specific tests to demonstrate that the requested functionality has been achieved, are those tests reflected by one or more requirements to an appropriate level of detail?
□ In the project execution methodology, is there an expectation for a communication plan? There may be a need for an explicit requirement for a communication plan as a deliverable.
□ In the project execution methodology, is there a process for obtaining stakeholder approval of the BRD? It may be prudent to collect an email from each stakeholder documenting their approval and embed those emails in the appendix of the BRD so that the BRD can become a complete historical artifact.

For more information about Context Diagrams, see Software Requirements, Karl Wiegers, Microsoft Press, 2nd edition, 2003, Chapter 5.

If the project will change an existing business process, then it is particularly important to have a map of the as-is process so that there is a baseline to compare the future state to. This is not the time to document the to-be process because the solution hasn’t been chosen yet. If it is easier to convey the requirement with a to-be process flow, make the process flow as solution-agnostic as possible and indicate that the process flow is a hypothetical example.

Summary

Did the document pass the sanity check? If one or more of the items in the list above is not checked, there is some risk that the requirements document is insufficiently clear or incomplete. When a BRD can’t pass a sanity check, it is time for brave BAs to “just say no” to pressure to proceed with the review of the requirements themselves, let alone approve the entire BRD.
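
If you like to make that tally mechanical, here is a minimal sketch of the go/no-go logic in Python – purely illustrative, with abbreviated item wording and invented names: record a yes/no answer for each checklist item and refuse to proceed while anything remains unchecked.

# A minimal, hypothetical sketch: tally the sanity check and decide go/no-go.
# Item wording is abbreviated from the checklist above; all names are invented.
sanity_check = {
    "Scope section identifies what is in and out of scope": True,
    "Stakeholders share the same understanding of scope": True,
    "Business Context Diagram present, entities concisely defined": False,
    "Business rules articulated": True,
    "Requirements categories make sense": False,
}

# Collect every item the reviewer could not answer 'yes' to.
unchecked = [item for item, passed in sanity_check.items() if not passed]

if unchecked:
    print("No-go: the BRD did not pass the sanity check. Open items:")
    for item in unchecked:
        print(" -", item)
else:
    print("Go: proceed to the Requirements Statements Content Check.")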

If you are a peer reviewer, your job is to help the BRD author determine the best course of action for correcting these flaws and errors. As a peer reviewer you are expected to return written feedback. Take time to explain what is missing, and provide examples of the misunderstanding that might ensue. It would be a kindness to provide feedback to the BRD’s author in advance so that they have time to think about your comments and perhaps initiate the conversations to get some of the questions resolved.

Once your BRD has passed the sanity check – then and only then – proceed to the review of the requirements statements themselves, which we’ll cover in Part 3, to be published in November.

Use and Profit from Peer Reviews on Requirements Documents

  1. Business Case Synchronicity Check
  2. Requirements Document Sanity Check
  3. Requirements Statements Content Check [November, 2009]
  4. Requirements Document Housekeeping Check [December, 2009]



Cecilie Hoffman is a Senior Principal IT Business Analyst in the Business Analysis Center of Excellence, Symantec Services Group, Symantec Corporation. Cecilie’s professional passion is to educate technical and business teams about the role of the business analyst, and to empower the business analysts themselves with tools, methods, strategies and confidence. Cecilie is a founding member of the Silicon Valley chapter of the IIBA. She writes a blog on her personal passion, motorcycle riding, at balsamfir.com. She can be reached at [email protected].

Rebecca Burgess is the Business Process Methodology Analyst in the Commerce Lifecycle Transformation Office at Symantec and a Certified Six Sigma Black Belt. After many years of uncovering problems and determining root causes in software, she is now applying her BA skills to strategic process design and improvement. She can be reached at [email protected]

Business Analysis Benchmark – The Path to Success

The Business Analysis Benchmark is a large-scale survey effort designed to assess the link between an organization’s maturity in requirements definition and management and project outcomes. This year’s theme is The Path to Success; the study presents detailed findings on the impact of business requirements maturity and analyzes the strategies and tactics needed to improve requirements maturity.

“This survey is a testament to the need for investing in your requirements process to deliver value to your stakeholders.”
Scott Hebner, Vice President Marketing and Strategy, IBM Rational Software

The Requirements Maturity Model (RMM) is a means to benchmark an organization’s effectiveness in requirements definition and management by looking at maturity in six underlying capabilities. Like similar standards-based models, it classifies companies based on observed, tangible competency in each capability to make an objective assessment of overall maturity. Using this approach, the report found:

  • Requirements maturity improvement is highly correlated with improvement in development effectiveness.
  • Requirements maturity cannot be changed through continuous focus on only one underlying capability.
  • High requirements maturity companies can be found amongst the followers of many different approaches to development, such as Agile, Iterative, Plan Driven (Waterfall), and Prototyping/Visualization-centric methods.

The above findings validate the Requirements Maturity Model as a mechanism for identifying the impact of poor requirements practices on companies, quantifying the performance change expected for a particular organization’s situation, and diagnosing the changes needed should a company choose to pursue a path of improvement. This report identifies both the strategy and tactics of enhancing requirements definition and management maturity. The statistics presented in the Business Analysis Benchmark not only debunk a number of commonly held beliefs about development effectiveness, they show that the average organization wastes a large proportion of its IT development budget due to poor requirements maturity. To be clear, 75% of organizations surveyed waste over one in three dollars spent on IT development and implementation annually as a result of poor requirements maturity. These findings detail the key issues and actions needed to recapture this wastage.

Key Findings of the Business Analysis Benchmark include:

  1. Requirements maturity has a strong positive correlation to EVERY major measure of development efficiency assessed. On-time performance, on-budget performance, on-function performance, overrun magnitudes for each of the above, and project success rates all improve as requirements maturity increases. On average, performance virtually doubled on each of these metrics as organizations progressed from using an ad-hoc approach for requirements definition and management to having institutionalized and consistent competency in all capability areas.

     “This [report] was extremely helpful to me, not only to understand the findings relating to my current situation, but also what CEOs and CIOs are interested in.”
     Carol Deutschlander, Home Hardware Stores Limited

    • Average on-time performance of technology projects increased by 161%.
    • Time overruns on projects reduced by 87%.
    • Average on-budget performance for technology projects improved by just over 95%.
    • Budget overruns reduced by just under 75%.
    • Percentage of projects that deliver the functionality needed by the business rose by just over 75%.
    • Average functionality missed dropped by approximately 78%.
  2. A total of 74.1 per cent of survey respondents were classified as immature Level 1 or Level 2 organizations (where the highest maturity Level is 5). These organizations waste 39% and 34% respectively of their development budget due to poor requirements definition and management maturity. This wastage, due to poor requirements maturity, will increase to over 50% of IT spending on development and a significant proportion of the maintenance budget in certain circumstances.
  3. Poor requirements definition and management maturity undermines organizational competitiveness. Organizations with poor requirements maturity expend far more time, budget, and management effort to achieve the same results as organizations with high maturity. For example, organizations with low requirements maturity achieve the business objectives of a project initiative a mere 54% of the time while taking 35% more time to achieve this poorer result. This impact may be so significant over time that it shifts fundamental financial performance metrics such as Return on Assets (ROA). It was found that Level 4 companies, on average, outperform the ROA of their peer group competitors by 10%.
  4. While this report discusses and busts a number of commonly held beliefs about requirements and development efficiency, two issues garnered significant attention and support from the report’s external review panel:

CIOs cannot simply attempt to hire great analysts and expect the problem of poor requirements to go away. In fact, lower-skilled people in a high requirements maturity company significantly outperform highly skilled people in a low requirements maturity company.

“I’ve worked carefully through the Benchmark Study. It’s terrific stuff — some of the conclusions are almost iconoclastic, and yet they make tremendous sense once you analyze them. And the RMM is an excellent tool — of course it does and should heavily parallel CMM / CMMI, but it also provides tremendous value added as you’ve applied/customized it to Business Analysis practice.”
Senior Requirements Specialist, Major Property & Casualty Insurance Company

Agile, Waterfall, Iterative, and Prototyping/Visualization methods have immaterial performance differences for any given level of requirements definition and management maturity. There is a raging debate amongst development methodologists, each espousing one method over another. This study finds that changing development methods – in the absence of also improving requirements competence in the areas of process, techniques, staff, technology, organization and deliverables – only nominally improved or reduced overall success rates on projects. Excellent results have been achieved with all these approaches, and the findings of the Business Analysis Benchmark do not endorse any one method over another. The key issue for readers: the overall level of requirements maturity has a MUCH greater effect on project outcome than the development method selected.

The Business Analysis Benchmark describes the issues and impacts at each level of the organization, and the role each level plays in moving a company forward along the path of maturity. The report opens with a preface that describes the survey, the maturity model, and basic facts surrounding the impact of requirements maturity on project outcome. The remainder of the report is organized by readership group, discussing the key findings as they relate to:

  1. The CEO: how does requirements maturity impact overall organizational competitiveness?
  2. The CIO: how does IT Leadership approach the major issues in making requirements definition and management change?
  3. The PM and BA Leadership: what is the effectiveness of various paths of change, and what are the required activities to bring improvement? In addition to this content, IAG has also asked a series of external reviewers to comment on survey findings. Some of these insights are captured in the call-out boxes in this article.

The Survey – How it was Conducted

Last year, over 22 million business and IT professionals in 80 countries and 10 languages benefited from the statistics generated by the Business Analysis Benchmark.

This year’s survey theme – The Path to Success – identifies a roadmap for maturing requirements definition and management practices. This study is about getting repeatable success on strategic IT projects.

In Q2 of 2009, just under 550 companies chose to participate in the Business Analysis Benchmark – Path to Success survey, leading to 437 qualifying responses. This survey was designed with the intent of assessing the link between an organization’s maturity in requirements definition and management and project outcomes. The Business Analysis Benchmark statistics only include respondents that met the following three criteria:

  1. The company spends over $1 million annually in application development (net of hardware) or software implementation.
  2. The individual must have experience with business requirements and project management where net new functionality is added to the business.
  3. The company must have run at least four projects in excess of $250,000 in the last 12 months.

These criteria ensured that only experienced professionals with knowledge of requirements definition and management issues would be included in the survey results. The results are weighted toward medium- and large-sized commercial companies in North America. The sampling is summarized below:

Position (% of respondents)

  • Executive: Head of IT, CIO, Head of Development, Line of Business Executive – 12.2%
  • Head of PMO or Project Manager – 27.1%
  • Head of Business Analysis Competency Center or BA – 52.5%
  • Other – 8.3%
  • Total – 100.0%

Number of Employees in Company (% of respondents)

  • 1-99 – 6.2%
  • 100 to 499 – 14.3%
  • 500 to 2,499 – 20.3%
  • 2,500 to 4,999 – 11.5%
  • 5,000 to 9,999 – 8.5%
  • 10,000+ – 39.2%
  • Total – 100.0%

Industry (% of respondents)

  • Energy, Resources & Utilities – 3.9%
  • Financial Services – 17.7%
  • Insurance – 9.9%
  • Government & Social Services – 8.7%
  • Healthcare & Pharmaceutical – 8.0%
  • Manufacturing – 6.0%
  • Media & Industry Analysts – 1.1%
  • Military & Defense – 1.4%
  • Professional & IT Services – 14.4%
  • Retail, Transportation & Distribution – 5.0%
  • Software – 9.2%
  • Telecommunications – 6.7%
  • Education – 2.1%
  • Other – 6.0%
  • Total – 100.0%

Region (# of respondents)

  • United States – 233
  • Canada – 116
  • Western & Eastern Europe – 26
  • India/Pakistan – 24
  • Asia/Pacific – 22
  • Africa (mainly South Africa) – 6
  • Middle East – 5
  • Central/South America – 5
  • Total – 437



Keith Ellis is the Vice President, Marketing at IAG Consulting (www.iag.biz) – and author of the Business Analysis Benchmark – where he leads the marketing and strategic alliances efforts of this global leader in business requirements discovery and management. Keith is a veteran of the technology services business and founder of the business analysis company Digital Mosaic which was sold to IAG in 2007. Keith’s former lives have included leading the consulting and services research efforts of the technology trend watcher International Data Corporation in Canada, and the marketing strategy of the global outsourcer CGI in the financial services sector.

To download your free copy of the full Business Analysis Benchmark study, visit www.iag.biz.

The Three Most Important BA Factors

“Location, Location, Location” is the phrase real estate experts use for the three most important factors in determining the desirability of a property. I have a phrase for the three most important factors in determining desirable business analysis: “Iteration, Iteration, Iteration.” This thought hit me yesterday when I was facilitating a meeting. There I am, up in front of 20 people in a conference room, and I see someone sleeping. Eyes shut, head bobbing…out! It happened to be my PM…yikes. Side note – I don’t worry if one person falls asleep in my meetings. When two or more start crashing, I know I need to switch things up!

“Sleeping Beauty” reminded me of my requirements reviews earlier in my career. Participants did not fall asleep, but they told me those meetings were a bear to get through and that they mentally checked out 20 minutes in. In the past I would wait until I had everything documented before holding a review – the old throw-it-over-the-fence mentality. Knowing the importance of having key stakeholders review the requirements, I adapted my approach and started reviewing small chunks of requirements at a time. Then, when all the requirements are complete, the final review is painless.

Here is an illustration of a basic flow to explain the process.

[Illustration: basic flow of the iterative review process]

There are two huge benefits with this approach in addition to avoiding people falling asleep in your meetings.

  1. You validate that you are headed down the right path. If you wait until the end to review what you captured, a lot of time will be wasted if you missed the mark.
  2. Stakeholders can absorb small chunks. It is very hard to absorb an entire project’s requirements in one sitting. Reviewing in iterations allows stakeholders to focus on one area and really review the document.

When I speak to BAs about this approach a common reaction is, “I don’t like showing a customer something that is not perfect.” My response: “Get over it!” Requirements do not need to be perfect. They need to be accurate and sufficient for the team to build the right solution. It is better to find any issues early and make corrections.

I have also run into QA analysts and developers who don’t want to be bothered reviewing requirements before they are “complete”. You know what I tell them? Right, “Get over it!” Requirements definition is a team activity. All team members and business stakeholders need to work together to elicit, clarify and communicate requirements.

If you want to be a desirable business analyst…iterate, iterate, iterate! You’ll stay on track and have a higher rate of success in making sure the entire team understands and agrees on the requirements, which will lead to successful projects.

All the best,

Kupe



Jonathan “Kupe” Kupersmith is Director of Client Solutions, B2T Training and has over 12 years of business analysis experience. He has served as the lead Business Analyst and Project Manager on projects in various industries. He serves as a mentor for business analysis professionals and is a Certified Business Analysis Professional (CBAP) through the IIBA and is BA Certified through B2T Training. Kupe is a connector and has a goal in life to meet everyone! Contact Kupe at [email protected].

User Stories – Large Misconceptions. Part 2.

In my last post I discussed two misconceptions related to user stories. The first was the notion that user stories are static artifacts; the second was that they stand alone in representing the nuance of requirements. Both of these views are false. In this month’s post, I want to wrap up with two more misconceptions. One is that only a customer surrogate or product owner can write them. The other is underestimating the power of acceptance tests as a clarifying vehicle for the story.

Anyone on the Team Can (and Should) Create and Update User Stories!

I’ve seen the position of “only the product owner can write stories” taken time after time when I’m coaching agile teams. I’ll enter a Sprint Planning session and we’ll look at the Product Backlog. The team will complain about the backlog not being detailed enough or not containing the ‘right’ set of stories. We’ll spend literally hours reviewing them and debating their intent until we run out of time in our time-box and need to reschedule the planning session.

Inevitably these teams point to the Product Owner as the problem, complaining that they were not prepared.

As an agile coach I always challenge the entire team in these situations. It’s every team member’s responsibility and privilege to write and refine the stories on their product backlog. You see, user stories don’t simply capture ‘requirements’. Instead, they capture all work (let me repeat that) all work that will be undertaken by the team in order to meet the business’s expectations for the project.

Given this, it’s no wonder the product owner (or BA) can’t fill in a complete set of stories. Nor do they have the functional expertise that the team has in areas of software analysis and design, or development, or testing, or converging a product for customer use.

The user stories must be truly owned by the entire agile team. Sure, the product owner will probably spend significant time writing, refining, and ordering them. But consider that to be simply a “seeding pattern”. The stories aren’t truly complete until the team has iteratively refined them together. In Scrum, an oft-used term for this is Grooming the Backlog: the team gathers, either as a group or individually, and refines the stories that make up the backlog.

Underestimating the Power of Acceptance Tests

Much of the focus of writing user stories is on the story-side or behavior-side of the card. Often there is little to no investment in developing acceptance tests (Confirmations) that are placed on the back of the card.

Why is that? My view is that defining test cases is hard for most teams to grasp. It’s sort of an afterthought and not everyone gets the intent or power of acceptance tests. I consider them more important than the user story description, and here are some helpful ways to view them:

  • Consider them mini-UAT tests at a story or feature level
  • They confirm Sprint Done-Ness from the perspective of the customer and the tester
  • They enable testers and BAs on the team to quantify key conditions that the software must meet
  • They drive the design and capability of the software; consider them defining the business logic for a story
  • They are a collaboration driver between developer(s), tester(s), BA(s), and product owner
  • They are either working or not; there is no in-between

There’s quite a bit of debate surrounding how many acceptance tests should be defined for each story. Certainly, they are not intended to exhaustively test the story, so they are not a complete list of test cases. I usually recommend somewhere between three and seven acceptance tests per story, as determined by the team. One or two is certainly too few, and 25 is clearly too many.

There is also debate surrounding how to phrase the acceptance tests. I’ve seen the following patterns across teams I’ve been coaching:

  • Verify that…”some behavior”
  • Confirm that…”again some behavior”
  • If I do this…this results
  • Under these preconditions, confirm that…”some behavior”

What’s important here, truly important, is not the phrasing, but that you define and confirm acceptance tests on a story by story basis. It drives quality collaboration and quality results within your agile teams.
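
To make those patterns concrete, here is a minimal sketch of story confirmations expressed as executable checks in Python (runnable with pytest). The “password reset” story, the business rule, and every name below are invented for illustration; a real team’s acceptance tests would exercise its actual system.

# Hypothetical story: "As a registered user, I want to reset my password
# so that I can regain access to my account."
# Each test below maps to one "Verify that..." confirmation on the card.

MIN_LENGTH = 8  # invented business rule: new passwords need 8+ characters

def is_valid_new_password(old_password, new_password):
    """Invented rule: long enough, and not a reuse of the old password."""
    return len(new_password) >= MIN_LENGTH and new_password != old_password

def test_accepts_password_that_meets_the_rule():
    # Verify that a sufficiently long, changed password is accepted.
    assert is_valid_new_password("oldsecret1", "newsecret99")

def test_rejects_password_below_minimum_length():
    # Verify that a password shorter than the minimum is rejected.
    assert not is_valid_new_password("oldsecret1", "short")

def test_rejects_reuse_of_previous_password():
    # Verify that reusing the previous password is rejected.
    assert not is_valid_new_password("oldsecret1", "oldsecret1")

Three confirmations, each of which either passes or fails – which matches the “working or not, no in-between” view in the list above.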

Wrapping up – I hope I’ve clarified these four misconceptions and ultimately improved your user story writing.



Robert ‘Bob’ Galen is the President and Principal Consultant of RGCG, L.L.C. Bob has held director, manager and contributor level positions in both software development and quality assurance organizations. He has over 25 years of experience working across a wide variety of software technology and product domains. Bob can be reached at [email protected].