

Why Cyber And Physical Security Needs To Be Top Of Mind For Business Analysts

Cybercrime costs the global economy an estimated $2.9 million every minute, and data breaches alone cost major businesses around $25 per minute.

It is no secret that cybersecurity is a top priority for all businesses, especially those adopting emerging cloud-based and IoT technologies.

Cyber and physical security are becoming increasingly linked concepts, and business analysts must be prepared to include physical and digital security in any project they oversee. Keep reading to understand how you can implement a physical and digital converged security strategy in your business.


Why Business Analysts Should Have Cybersecurity Knowledge

Cybersecurity is part of risk management and should be included in every project your company oversees.

As cloud-based technologies play a growing role in physical security strategies and are necessary to support hybrid working, business analysts need a stronger grounding in cybersecurity.

The business analyst is the technical liaison between the project manager and the technical lead. The business analyst must be able to straddle both worlds of project management and the technical side of the operation.

By ensuring a thorough knowledge of cyber and physical security, the business analyst can improve communications and create swifter operating procedures. Business analysis isn’t just common sense; it requires knowledge of key aspects of a business’s infrastructure.

Merging cyber and physical security is necessary to meet modern security demands, as the internet of things (IoT) and cloud-based technologies are making businesses more exposed to hacking. The data analysts need to evaluate is hosted on cloud-based platforms that require protection to prevent a breach.


How To Implement Better Cybersecurity Practices

Here are some of the best tips for business analysts to modernize their security strategy to handle physical and digital security threats. Better cybersecurity means more trust from stakeholders and protection from potential losses caused by the exposure of sensitive data.


Merging Physical And Digital Security

With the increased adoption of the internet of things (IoT) and cloud-based technologies, your business’s security teams need restructuring. Housing digital and physical security teams separately can make modern security threats increasingly challenging to handle effectively.

With assets becoming both physical and digital, your physical security staff and IT team may have difficulty determining which security elements fall under their jurisdiction. By merging both teams, you can improve their communication and create a physical and digital security strategy that allows for faster response to physical and digital security threats.

You can integrate cybersecurity software with physical security technologies to prevent unauthorized users from accessing critical security data. This modernizes your cloud-based security system and makes it far more resilient to both physical and digital security breaches.

Using Physical Access Control To Protect Digital Assets

Your office building is home to many digital assets that store sensitive data. If an unauthorized user gains access to your physical servers, your data is vulnerable. However, you can prevent unauthorized users from entering your building by installing access control technology on your doors.

Mobile credentials can be used for access control security, which has many benefits. Rather than waiting to be issued a physical key or keycard, users simply download an app and receive their mobile access key. Bluetooth access readers can detect mobile devices stored in pockets and bags, meaning that employees can enter the building without even removing their device to present it to the reader. Smart door locks can enhance your security while making access more convenient for your employees.
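As an illustration of the kind of check an access control backend performs, here is a minimal sketch of issuing and validating a time-limited, signed mobile credential. The token format, shared secret, and function names are hypothetical assumptions for the sketch, not any vendor’s API.

```python
import hmac
import hashlib
import time

SECRET_KEY = b"building-access-secret"  # hypothetical shared secret held by the access system

def issue_credential(user_id: str, valid_for_s: int = 8 * 3600) -> str:
    """Issue a signed, time-limited mobile access credential."""
    expiry = int(time.time()) + valid_for_s
    payload = f"{user_id}:{expiry}"
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def validate_credential(token: str) -> bool:
    """Door-reader check: the signature must match and the credential must not be expired."""
    try:
        user_id, expiry, sig = token.rsplit(":", 2)
    except ValueError:
        return False  # malformed token
    payload = f"{user_id}:{expiry}"
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and int(expiry) > time.time()

token = issue_credential("alice")
print(validate_credential(token))        # True for a freshly issued credential
print(validate_credential(token + "x"))  # False: tampered signature
```

A real deployment would add revocation and per-door permissions, but the core idea is the same: the reader trusts cryptography, not the device presenting the token.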

Using A Zero-Trust Security Strategy

A zero-trust security strategy does not assume the trustworthiness of employees and building visitors. Zero-trust can be applied to both physical and digital security strategies to remove the possibility of an internal security breach.

Access control door locks can be installed internally in your building to ensure that permissions to areas containing sensitive data and company assets are only granted to users that require access. The same principle can be applied to your cybersecurity strategy, only giving users permissions to access the data they need to perform daily operations.
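This least-privilege principle can be sketched in a few lines of code. The roles, permissions, and mapping below are purely illustrative assumptions, not a reference to any particular access control product:

```python
# Illustrative least-privilege model: each role is granted only the
# permissions it strictly requires (roles and resources are hypothetical).
ROLE_PERMISSIONS = {
    "analyst": {"read:reports"},
    "hr": {"read:reports", "read:personnel"},
    "admin": {"read:reports", "read:personnel", "write:config"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Zero trust: deny by default, allow only explicit grants."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("analyst", "read:reports"))    # True
print(is_allowed("analyst", "read:personnel"))  # False: not explicitly granted
```

The key design choice is the default: an unknown role or permission yields a denial, mirroring how zero-trust door locks deny entry unless a user was explicitly granted access to that area.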

A zero-trust security strategy is essential to eliminate the risk of an internal security breach that could cost your business money and lose your stakeholders’ trust.


Cybersecurity Training

Data is more vulnerable in a hybrid or remote work model, which means your employees should receive training on keeping their devices and networks protected. You can start by providing basic cybersecurity training covering the following topics:

  • How to avoid phishing scams.
  • How to set strong passwords.
  • How employees can keep their device software up to date to avoid vulnerabilities.

By providing your employees with basic cybersecurity training, you can significantly reduce the likelihood of a cybersecurity breach caused by human error.
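The password guidance such training covers can even be checked mechanically. The sketch below applies a basic, illustrative policy; the specific rules are assumptions for the example, not a recommendation of any particular standard:

```python
import re

def password_issues(password: str) -> list[str]:
    """Return the reasons a password fails a basic policy (illustrative rules)."""
    issues = []
    if len(password) < 12:
        issues.append("shorter than 12 characters")
    if not re.search(r"[a-z]", password):
        issues.append("no lowercase letter")
    if not re.search(r"[A-Z]", password):
        issues.append("no uppercase letter")
    if not re.search(r"\d", password):
        issues.append("no digit")
    return issues

print(password_issues("hunter2"))                  # fails on length and uppercase
print(password_issues("Correct-Horse-Battery-9"))  # []
```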


Business analysts face new challenges in overseeing projects that are sufficiently protected both physically and digitally. A converged security strategy combines the strengths of physical and digital security and helps futureproof your business against the changing nature of security threats.

The Delicate Balance

As a new analyst on the User Account Management team at Playtech, I often find myself making a challenging choice. On the one hand, there is the option to build a new feature, create a new configuration, and so on. On the other hand, there is the option to create a new automated rule and custom tags to simulate the same outcome. The dilemma of dev vs. no dev. And it is often not a clear-cut case.

As fellow analysts or product owners know, a proper feature for a set of use cases is… neat and tidy. It is easy to refer to when answering questions from users. It can be reused. It can be further developed. It feels like something got done.


The challenging alternative is to choose an already existing powerful tool (for example Automation Service Engine) that can be used to create business rules based on defined parameters. This is awesome! No more dev for us. We have created a tool that replaces custom dev. But wait, we need to do some small tweaks to this tool before it becomes capable of achieving the desired outcome.
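To make the pattern concrete, here is a toy sketch of the rules-plus-tags approach. This is not the actual Automation Service Engine; the `Rule` and `Player` structures and the example rule are purely hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Player:
    deposits_today: float
    tags: set = field(default_factory=set)

# A "rule" is just a predicate plus a tag to apply - configuration, not new code.
@dataclass
class Rule:
    name: str
    condition: Callable[[Player], bool]
    tag: str

def apply_rules(player: Player, rules: list) -> Player:
    """Evaluate every rule against the player and attach matching tags."""
    for rule in rules:
        if rule.condition(player):
            player.tags.add(rule.tag)
    return player

rules = [
    Rule("high-deposit", lambda p: p.deposits_today > 1000, "review:responsible-gaming"),
]
player = apply_rules(Player(deposits_today=1500.0), rules)
print(player.tags)  # {'review:responsible-gaming'}
```

The appeal is clear: a new business requirement becomes a new `Rule` entry rather than a development ticket. The cost, as discussed below, is that the catalogue of ad hoc rules itself must be maintained and understood.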

After having done some soul searching on this eternal dilemma, I have found it comes down to these 5 considerations:

Backwards Compatibility

We have 118 operators with a total of 385 brands. When adding new configuration options, we need to be extra careful to make sure that current setups do not suffer. Often this means we need to come up with a default value that maintains the status quo, in addition to the new value that enables the new behavior. Never has the caricature of “repairing an airplane mid-flight” felt truer.

Let’s take Self-Exclusion for example. It is already quite complex and at the same time business-critical. If we make a minor change and this feature then starts acting even in a slightly different way, we’ll potentially have several major regulatory breaches that can end up in large fines or even lost operating licenses for whole countries.
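One common way to honor backwards compatibility in code is to ship every new configuration option with a default that reproduces the old behavior, so untouched setups are unaffected. The sketch below is illustrative only; the option name and values are hypothetical, not actual IMS settings.

```python
# Hypothetical brand configuration: every new option defaults to the
# value that preserves existing behavior for all current setups.
LEGACY_DEFAULTS = {
    "self_exclusion_scope": "brand",  # old behavior: exclusion applies per brand
}

def effective_config(brand_config: dict) -> dict:
    """Merge a brand's explicit settings over backwards-compatible defaults."""
    return {**LEGACY_DEFAULTS, **brand_config}

# An untouched brand keeps the status quo...
print(effective_config({}))  # {'self_exclusion_scope': 'brand'}
# ...while a brand that opts in gets the new behavior.
print(effective_config({"self_exclusion_scope": "operator"}))
```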


Reusability

If we invest time here now, will it be reusable for future cases? Will there be similar requests in the future? In what ways do we need to design flexibility into the feature? This comes down to the infamous “gut feeling”. I have a gut feeling that in the future we will see much more regulatory focus on active intervention. Licensees would have to actively monitor their player base to identify at-risk players and proactively suggest different self-governed limits, or indeed apply limits without consent (but still with a duty to inform). This would require a well-rounded interface incorporating communication features, at-risk player discovery (using machine intelligence), flagging of said players, and responsible gaming features close at hand. This would be a considerable interdisciplinary effort. Would it be worth the investment?


Maintainability

We are the guardians of the system. We should know the best ways to solve business and regulatory requirements, either by using existing features or by creating new ones where appropriate. The more ad hoc solutions we offer, the more cumbersome the system becomes to maintain. The limitations of human memory, combined with the problems of organizational memory, mean we would soon have no overview of how different configurations are achieved, or of how the system behaves when combining ad hoc solutions that were never designed to work together. I would argue that if there is a combination that is only meant to work together and excludes all other options, then it can effectively be considered a new, separate feature. The goal is to have fewer dependencies and fewer configuration options within one feature.

Fast Release Cycle

Every party involved in software product development wants to have a solution that requires the least amount of work possible. Our time is extremely valuable, and the cost of missing opportunities is even higher.

The time constraints also come into play regarding go-live dates, certification deadlines and so on. If there is an option faster than developing a new feature that must then be adopted by all parties down the line… often the best solution is to go ahead with the workaround. Sometimes the workaround becomes permanent, sometimes we get to phase two.


I am picturing the IMS setups of existing licensees as brides on their wedding day. It took months of preparation and multi-party negotiations to get every detail right (and somehow still over budget?). If possible, no one should touch a setup that works. Do not fix what is not broken, right? Mostly, people are just too afraid to upset the bride.

But it is also our hope that when we create a cool new feature that would benefit existing licensees, they may become interested in improving their business processes, security, or customer experience. Indeed, they are paying for a continuously developing platform so they can benefit from new features, and we are actively promoting them.

These features need to be extremely easy and fool-proof to adopt. If a client gets burned once, they will be even more hesitant to change their working setup the next time.

No Clear Answer

Usually, you can get some sort of an answer to almost any problem by creating the famous pros and cons table. The problem is, I cannot draw a table (I tried) with all these considerations and two columns – one to tick in favor of dev and the other for no dev. It is not a binary choice, but more like a scale. Sometimes the scale tips only slightly in favor of one or the other in each aspect, sometimes more decidedly so.

Finally, I would like to give you some insight into how I adapted to this situation – maybe fellow analysts will recognize the steps below:

  1. A new-to-me, established, fully functional, super customizable system. Surely no problem is too big to be solved with a few simple tweaks.
  2. Uh-oh. Tweaks are not enough. New features are the order of the day! Everything should be solved with new features.
  3. Wait… could we add a small checkbox here in this feature to create a configuration option?
  4. Too many configurations! Who on earth can keep track of them all?
  5. Aha! A rule engine… and tags! Let’s see how many problems could be solved just by using them?
  6. Write an article.
  7. Continue to learn and develop.

Waterfall vs. Agile: A Relative Comparison

The merits of Agile versus Waterfall are well documented. However, it is useful to understand the relative differences between the paradigms and how they impact Business Analysis – particularly if you work in an environment where both approaches are used.

This article attempts to provide a visual, relative comparison between:

  • A traditional Waterfall method that moves through defined phases that include Requirements, Design, Implement and Verify, with a defined gateway denoting the point at which the method moves from one phase to the next.
  • An Agile method that aligns with the 12 principles of the Agile Manifesto.

The approaches are compared across three areas that matter to many Business Analysts:

  • The relative effort involved in specifying and managing requirements.
  • The relative risk posed by ill-defined requirements.
  • Time to realize benefits.

Requirements Management

The timeline for discovering, specifying, and managing requirements differs greatly between the two approaches. A traditional Waterfall approach includes the requirements phase early in the initiative where the focus is on requirements specification and management activities. At the end of this phase, the ability to change requirements is limited. Therefore, most of the effort to elicit and manage requirements happens during this early phase.

By comparison, requirements elicitation and management activities for an Agile initiative are more evenly distributed over the life of the initiative as requirements are constantly reviewed, updated, and prioritized.

The relative requirement management effort over time for each approach is shown in Figure 1.

Figure 1 – Waterfall vs. Agile: Relative requirements management effort over time

Relative Risk

Missing, incorrect, and/or otherwise ill-defined requirements put the delivery of fit-for-purpose products at risk.  However, the relative risk associated with ill-defined requirements is quite different when comparing Waterfall and Agile approaches.

The risk posed by ill-defined requirements for a traditional Waterfall approach is lower during the requirements phase of the initiative as this is the time when requirements can be added and changed without impacting other areas. After this phase, the risk posed by ill-defined requirements dramatically increases and continues to increase for the duration of the initiative. By comparison, the risk posed by ill-defined requirements to Agile approaches is largely constant throughout the initiative. Figure 2 shows the relative risk of both approaches side-by-side.

Figure 2 – Waterfall vs. Agile: Relative risk posed by ill-defined requirements over time

However, it is worth analyzing the relative risk by its constituent components – the likelihood requirements will be ill-defined and the impact of having ill-defined requirements.

For a traditional Waterfall approach, all the effort to elicit and document requirements happens at the beginning of an initiative with limited mechanisms in place to revise or revisit requirements in later stages – regardless of what new information may come to light. This means that it is comparatively more likely (i.e. higher likelihood) that there will be ill-defined requirements. The likelihood of having ill-defined requirements is fairly consistent throughout the initiative as it is the result of a constraint imposed by the methodology.


In contrast, the impact of ill-defined requirements is low for Waterfall approaches during the initial requirements phase of the project, as this is when there are mechanisms in place to actively review and change requirements. After this point, the impact of ill-defined requirements can increase quite dramatically (particularly for initiatives where the next phase involves procuring resources and/or base products to match the requirements) and continues to increase throughout the life of the initiative. This is because the cost of changing products increases as the initiative progresses through the design, implement and verify phases. This is demonstrated in Figure 3.

Figure 3 – Waterfall: Likelihood and impact of ill-defined requirements over time

In comparison, Agile methods include mechanisms to incorporate new information into requirements throughout the life of the initiative, meaning the likelihood of ill-defined requirements decreases as the initiative progresses. By comparison, the impact of ill-defined requirements increases over the life of the initiative as products are incrementally released. This is shown in Figure 4.

Figure 4 – Agile: Likelihood and impact of ill-defined requirements over time

By the end, the impact of ill-defined requirements on comparable initiatives is relatively similar for both Waterfall and Agile methods – it is the likelihood that contributes to the overall difference in relative risk.
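This decomposition can be sketched numerically by treating relative risk at each point in time as likelihood multiplied by impact. The curve shapes below are illustrative only, chosen to mirror the qualitative description (similar end-state impact, differing likelihood); they are not measured data.

```python
def waterfall_risk(t: float) -> float:
    """t in [0, 1]; likelihood stays high, impact climbs after the requirements gateway."""
    likelihood = 0.8  # roughly constant: limited mechanisms to revisit requirements
    impact = 0.1 if t < 0.2 else 0.1 + 0.9 * (t - 0.2) / 0.8
    return likelihood * impact

def agile_risk(t: float) -> float:
    """Likelihood falls as requirements are continually reviewed; impact grows with releases."""
    likelihood = 0.8 - 0.6 * t
    impact = 0.1 + 0.9 * t  # same end-state impact as the Waterfall curve
    return likelihood * impact

for t in (0.0, 0.5, 1.0):
    print(f"t={t:.1f}  waterfall={waterfall_risk(t):.2f}  agile={agile_risk(t):.2f}")
```

By construction, both approaches end with the same impact (1.0), so the gap in final risk (0.8 vs. 0.2 here) comes entirely from the likelihood term, matching the observation above.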

Benefits Realisation

Another key point of difference between Waterfall and Agile methods is when benefits are realized. For Waterfall initiatives, benefits cannot be realized until the core products are fully delivered, leaving limited opportunity for early benefits realization. By comparison, Agile methods offer opportunities to realize benefits early through the incremental delivery of products, as demonstrated in Figure 5.

Figure 5 – Waterfall vs. Agile: Relative time to the realization of benefits

Why does it matter?

So why is it useful to understand the relative differences between Waterfall and Agile approaches? There are a few ways increased understanding of the relative differences between methodologies can help, including:

  • Resource Planning – helping you to plan and allocate resources where they are needed most, based on the approach being used.
  • Communication – describing the relative merits and risks of an approach to stakeholders more clearly.
  • Arguing your case – providing talking points to help you argue for an alternative approach.
  • Assessing alternatives – providing a basis for assessing alternative approaches and for tailoring methodologies.


This article has provided a relative comparison between Waterfall and Agile across three areas. In comparing Waterfall and Agile paradigms in relative terms this article does not seek to promote one over the other – both have their place. In addition, this analysis has not accounted for all the available variations, flavours, and mashups of approaches. However, understanding the relative differences between the basic paradigms can assist when preparing and planning to work with a specific methodology – particularly in environments where analysts may be expected to work with multiple different approaches.