What Is Shift Left Testing

Shift-left testing essentially means “test often, and start as early as possible.” The shift-left movement is about pushing testing toward the early stages of software development. By testing early and often, a project can reduce the number of bugs and increase the quality of the code. The goal is to avoid finding critical bugs during the deployment phase that require code patching.

Shift-left testing advocates placing testing at the start of development instead of at its end. In truly agile software development there shouldn’t be phases, but rather continuous activities occurring in short, iterative cycles. Shift-left testing in agile is all about small code increments: the agile methodology includes testing as an integral part of each short development cycle, so shift-left testing fits nicely into the agile idea. The testing engineer verifies each code increment at the end of every iteration, often a two-week sprint.

Some organizations like to push shift-left testing even further, into the coding phase itself. A good approach to adopt is test-driven development (TDD), which requires you to write the tests for a piece of code before you write the code itself, so you can immediately verify its behavior.
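
As a minimal illustration, here is what that order of operations might look like with pytest; the slugify function and its expected behavior are hypothetical stand-ins for whatever you are about to build:

    # test_slugify.py -- written FIRST, before slugify() exists.
    # Running pytest at this point fails, which is the expected
    # starting state in test-driven development.
    from slugify_util import slugify

    def test_lowercases_and_joins_words():
        assert slugify("Shift Left Testing") == "shift-left-testing"

    def test_strips_surrounding_whitespace():
        assert slugify("  hello world  ") == "hello-world"

    # slugify_util.py -- written SECOND: the minimal implementation
    # that makes the tests above pass.
    def slugify(text: str) -> str:
        return "-".join(text.split()).lower()

The cycle then repeats: add a failing test for the next requirement, make it pass, and refactor with the safety net already in place.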

Another way of pushing testing further left is the use of static analysis tools. A static analysis tool identifies problems such as mismatched parameter types or incorrect usage of interfaces before the code ever runs.
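
For instance, a type checker such as mypy can flag the following defect without executing a single line; the function is a made-up example of the kind of parameter-type problem these tools catch:

    # retry.py -- a parameter-type bug that static analysis reports
    # at check time rather than at runtime.
    def retry_delay_seconds(attempt: int, base: float = 0.5) -> float:
        return base * (2 ** attempt)

    # Running `mypy retry.py` reports: Argument 1 to "retry_delay_seconds"
    # has incompatible type "str"; expected "int"
    delay = retry_delay_seconds("3")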

Furthermore, testing experts believe that behavior-driven development (BDD) can accelerate the shift left movement. BDD defines a common design language that can be understood by all stakeholders, such as product owners, testing engineers, and developers. Therefore, it enables all involved stakeholders to simultaneously work on the same product feature, accelerating the team’s agility.
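
That shared language usually takes the form of Gherkin scenarios that every stakeholder can read yet the team can execute. The sketch below uses pytest-bdd, one of several BDD frameworks; the login feature and step wording are purely illustrative:

    # features/login.feature -- plain-language spec, readable by
    # product owners, testing engineers, and developers alike:
    #
    #   Feature: Account login
    #     Scenario: Valid credentials
    #       Given a registered user "alice"
    #       When she logs in with the correct password
    #       Then she sees her dashboard

    # test_login.py -- binds each Gherkin step to executable code.
    from pytest_bdd import given, scenarios, then, when

    scenarios("features/login.feature")

    @given('a registered user "alice"', target_fixture="user")
    def user():
        return {"name": "alice", "password": "s3cret", "logged_in": False}

    @when("she logs in with the correct password")
    def login(user):
        user["logged_in"] = user["password"] == "s3cret"

    @then("she sees her dashboard")
    def dashboard(user):
        assert user["logged_in"]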

Benefits of Shift Left Testing

So, what are the benefits of shift left testing?

  • Find bugs early on in the software development life cycle
  • Reduce the cost of solving bugs by detecting them early on
  • Gain a higher-quality product as the code contains fewer patches and code fixes
  • Have fewer chances that the product overshoots the estimated timeline
  • Provide higher customer satisfaction as the code is stable and delivered within the budget
  • Maintain a higher-quality codebase

Another great benefit of shift-left testing is the ability to use test automation tools. Because the goal is to test early and often, test automation helps you keep pace; you don’t want to overload the testing team with manually testing every new feature the development team introduces.


You can read more about Shift Left Testing here.

Teknita has the expert resources to support all your technology initiatives.
We are always happy to hear from you.

Click here to connect with our experts!

Kubernetes Monitoring – 5 Key Metrics

Kubernetes is rapidly becoming the most important infrastructure platform in the modern IT environment. Known as K8s, it is an open-source system for automating deployment, scaling, and management of containerized applications.

How Kubernetes works

  1. When developers create a multi-container application, they plan out how all the parts fit and work together, how many of each component should run, and roughly what should happen when challenges (e.g., lots of users logging in at once) are encountered.
  2. They store their containerized application components in a container registry (local or remote) and capture this thinking in one or several text files comprising a configuration. To start the application, they “apply” the configuration to Kubernetes (a minimal sketch of this step follows the list).
  3. Kubernetes’ job is to evaluate and implement this configuration and maintain it until told otherwise. It:
    • Analyzes the configuration, aligning its requirements with those of all the other application configurations running on the system
    • Finds resources appropriate for running the new containers (e.g., some containers might need resources like GPUs that aren’t present on every host)
    • Grabs container images from the registry, starts up the new containers, and helps them connect to one another and to system resources (e.g., persistent storage), so the application works as a whole
  4. Then Kubernetes monitors everything, and when real events diverge from desired states, Kubernetes tries to fix things and adapt. For example, if a container crashes, Kubernetes restarts it. If an underlying server fails, Kubernetes finds resources elsewhere to run the containers that node was hosting. If traffic to an application suddenly spikes, Kubernetes can scale out containers to handle the additional load, in conformance with the rules and limits stated in the configuration.
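
As a rough sketch of step 2, this is how “applying” a desired state can look with the official kubernetes Python client; the two-replica nginx deployment is a placeholder example, and the snippet assumes `pip install kubernetes` plus a reachable cluster:

    from kubernetes import client, config

    config.load_kube_config()  # read cluster credentials from ~/.kube/config

    # A minimal desired state: two replicas of a placeholder container image.
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="web", image="nginx:1.25")]
                ),
            ),
        ),
    )

    # Hand the desired state to Kubernetes; from here on, the control plane
    # works continuously to keep two healthy replicas running (step 4).
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)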

Here are five categories of metrics to help you manage your Kubernetes environments.

Kubernetes Cluster Metrics

Monitoring the health of a Kubernetes cluster helps you understand which components impact it. For example, you can learn how many resources the cluster uses as a whole and how many applications run on each node within the cluster. You can also learn whether your nodes are working well and at what capacity.

Here are several useful metrics to monitor:

  • Node resource utilization—metrics such as network bandwidth, memory and CPU utilization, and disk utilization. You can use these metrics to decide whether to decrease or increase the number and size of cluster nodes (a retrieval sketch follows this list).
  • The number of nodes—this metric can help you learn what resources are being billed by the cloud provider and discover how the cluster is used.
  • Running pods—by tracking the number of running pods, you can understand whether the available nodes are sufficient to handle current workloads should a node fail.
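
If the metrics-server add-on is installed, node utilization (the data behind `kubectl top nodes`) can be pulled through the aggregated metrics API. A hedged sketch with the official Python client; it assumes `pip install kubernetes` and a cluster running metrics-server:

    from kubernetes import client, config

    config.load_kube_config()
    api = client.CustomObjectsApi()

    # metrics.k8s.io is the aggregated API served by metrics-server.
    node_metrics = api.list_cluster_custom_object("metrics.k8s.io", "v1beta1", "nodes")

    for item in node_metrics["items"]:
        usage = item["usage"]  # e.g. {'cpu': '137m', 'memory': '2210504Ki'}
        print(item["metadata"]["name"], usage["cpu"], usage["memory"])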

Kubernetes Pod Metrics

The process of monitoring a Kubernetes pod can be divided into three components:

  • Kubernetes metrics—these allow you to monitor how an individual pod is being handled and deployed by the orchestrator. You can monitor information such as the number of instances in a pod at a given moment compared to the expected number of instances (a lower number may indicate the cluster has run out of resources). You can also see in-progress deployment (the number of instances being switched to a newer version), check the health of your pods, and view network data.
  • Pod container metrics—these are mostly produced by cAdvisor and were historically exposed through Heapster (since deprecated in favor of metrics-server), which queries each node about its running containers. Important metrics include network, CPU, and memory usage, which can be compared with the maximum usage permitted.
  • Application-specific metrics—these are developed by the actual application itself and relate to specific business rules. A database application, for example, will likely expose metrics on the state of an index, as well as relational statistics, while an eCommerce application might expose data on the number of customers online and the revenue generated in a given timeframe. The application directly exposes these types of metrics, and you can link the app to a monitoring tool to track them more closely.
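
For that third category, the usual pattern is for the application to expose its own metrics endpoint for a collector such as Prometheus to scrape. A minimal sketch using the prometheus_client library; the metric names and values are illustrative:

    # Assumes `pip install prometheus-client`.
    import random
    import time

    from prometheus_client import Counter, Gauge, start_http_server

    # Illustrative business metrics an eCommerce app might expose.
    customers_online = Gauge("shop_customers_online", "Customers currently online")
    orders_total = Counter("shop_orders_total", "Orders placed since startup")

    start_http_server(8000)  # serves metrics at http://localhost:8000/metrics

    while True:
        customers_online.set(random.randint(50, 200))  # stand-in for real data
        orders_total.inc()
        time.sleep(5)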

State Metrics

kube-state-metrics is an add-on service that generates data on the state of cluster objects, including pods, nodes, namespaces, and DaemonSets, by listening to the Kubernetes API server. It exposes these metrics in the Prometheus text format on a standard /metrics HTTP endpoint.

Here are several aspects you can monitor using state metrics:

  • Persistent Volumes (PVs)—a PV is a storage resource specified on the cluster and made available as persistent storage to any pod that requests it. PVs are bound to pods (through PersistentVolumeClaims) during their lifecycle; when a PV is no longer needed by a pod, it is reclaimed. Monitoring PVs can help you learn when reclamation processes fail, which signifies that something is not working properly with your persistent storage.
  • Disk pressure—occurs when a node uses too much disk space or when a node uses disk space too quickly. Disk pressure is defined according to a configurable threshold. Monitoring this metric can help you learn if the application truly requires additional disk space or if it prematurely fills up the disk in an unanticipated manner.
  • Crash loop—can happen when a pod starts, crashes, and then gets stuck in a loop of continuously trying to restart without success. When a crash loop occurs, the application cannot run. It may be caused by an application crashing within the pod, a pod misconfiguration, or a deployment issue. Since there are many possibilities, debugging a crash loop can be tricky. However, you do need to learn of the crash immediately in order to quickly mitigate it or implement emergency measures that can keep the application available (a detection sketch follows this list).
  • Jobs—components designed to temporarily run pods. A job can run pods for a limited amount of time. Once the pods complete their functions, the job can shut them down. Sometimes, though, jobs do not complete their function successfully. This may happen due to a node being rebooted or crashing. It may also be the result of resource exhaustion. Monitoring job failures can help you learn when your application is not accessible.
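
Because kube-state-metrics serves plain-text Prometheus metrics, a first pass at spotting crash loops can be a simple scrape of its endpoint. In the sketch below, the local URL and the restart threshold are assumptions:

    # Assumes kube-state-metrics has been made reachable locally, e.g. with
    # `kubectl port-forward svc/kube-state-metrics 8080:8080`.
    import requests

    text = requests.get("http://localhost:8080/metrics", timeout=5).text

    # kube_pod_container_status_restarts_total climbs quickly for pods
    # stuck in CrashLoopBackOff.
    for line in text.splitlines():
        if line.startswith("kube_pod_container_status_restarts_total"):
            labels, value = line.rsplit(" ", 1)
            if float(value) > 5:  # assumed alerting threshold
                print("possible crash loop:", labels)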

Container Metrics

You should monitor container metrics to ensure containers are properly utilizing resources. These metrics can help you understand if you are reaching a predefined resource limit and detect pods that are stuck in a CrashLoopBackOff.

Here are several container metrics that you should monitor:

  • Container CPU usage—learn how much CPU your containers are using in relation to the pod limits you have defined (see the query sketch after this list).
  • Container memory utilization—discover how much memory your containers are utilizing in relation to the pod limits you have defined.
  • Network usage—detect sent and received data packets as well as how much bandwidth is being used.
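
If a Prometheus server is already scraping cAdvisor and kube-state-metrics, comparing usage against limits is a single query away. A hedged sketch; the server URL is a placeholder and the query assumes CPU limits are defined on the pods:

    import requests

    PROM = "http://localhost:9090/api/v1/query"  # placeholder Prometheus URL

    # Per-pod CPU usage as a fraction of the configured CPU limit.
    query = (
        'sum(rate(container_cpu_usage_seconds_total[5m])) by (pod) '
        '/ sum(kube_pod_container_resource_limits{resource="cpu"}) by (pod)'
    )

    results = requests.get(PROM, params={"query": query}, timeout=5).json()
    for r in results["data"]["result"]:
        pod = r["metric"].get("pod", "?")
        _, value = r["value"]  # [timestamp, "value"]
        print(f"{pod}: {float(value):.0%} of CPU limit")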

Application Metrics

These metrics can help you measure the availability and performance of the applications running in pods. The business scope of the application determines the type of metrics provided. Here are several important metrics:

  • Application availability—can help you measure the uptime and response times of the application, which in turn helps you assess user experience and performance (a minimal probe sketch follows this list).
  • Application health and performance—can help you learn about performance issues, latency, responsiveness, and other user experience issues. This metric can surface errors that should be fixed within the application layer.
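
At its simplest, availability monitoring is a scripted probe that records whether the application answered and how quickly. A minimal sketch; the health-check URL and thresholds are placeholders:

    import time

    import requests

    def probe(url: str) -> None:
        start = time.monotonic()
        try:
            resp = requests.get(url, timeout=3)
            latency_ms = (time.monotonic() - start) * 1000
            up = resp.status_code < 500
            print(f"up={up} status={resp.status_code} latency={latency_ms:.0f}ms")
        except requests.RequestException as exc:
            print(f"up=False error={exc}")

    probe("https://example.com/healthz")  # hypothetical health endpoint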

You can read more about Kubernetes Monitoring here.

Teknita has the expert resources to support all your technology initiatives.
We are always happy to hear from you.

Click here to connect with our experts!

9 Tips for Modernizing Aging IT Systems

1. Count the fails

It’s not the age of the system, necessarily, that is the biggest problem. Where it fails to do your bidding is the real issue.

“The first step in modernizing your IT system is to identify the specific failings of your current legacy system,” says Mo Hafez, senior solutions engineer at Expereo, an internet, cloud connectivity and SD-WAN provider. “Whether your specific problems or concerns are security, infrastructure, or a combination of those problems, identifying them early will ensure that your modernization efforts will be as efficient as possible.”

“The hardest, most challenging part involves setting your and your team’s expectations. A transformation requires taking little bites, one area at a time,” says Philip Morehead, director of product at Nexient, an agile software development provider.

2. Compare apples to barrels

Once you’ve identified where the failures are in aging systems, compute the costs in fixes, patches, upgrades, and add-ons to bring the system up to modern requirements. Now add any additional costs likely to be incurred in the near future to keep this system going. Compare the total to other available options, including a new or newer system.

“While this isn’t a one-size-fits-all approach, the last 2.5 years have proven just how quickly priorities can change,” says Brian Haines, chief strategy officer for FM:Systems, an integrated workspace management system software provider. “Rather than investing in point solutions that may serve the specific needs of the organization today, a workplace tech solution that offers the ability to add or even remove certain functions later to the same system means organizations can more efficiently respond to ever-changing business, employee, workplace, visitor and even asset needs going forward.”

3. Accelerate the automation

Make smart automation plans a part of your overall implementation strategy for modernizing your legacy systems.

“When it comes to automation, it’s all about building value to drive value. To modernize aging systems, there has to be a proactive approach to automation and understanding the ripple effects that come with it — then training for them across,” says Karlo Bustos, vice-president of Professional Services at Board Americas, a decision-making platform provider.

4. Do a madness check

You’re not saddled with legacy systems because you have a fetish for old and cranky tech. It’s much more likely that you inherited that bag of treachery, became a victim of way too many budget cuts, or got sucked into a black-hole mandate. Other types of madness may also be to blame.

“A significant challenge for IT experts is that some organizations have been previously unable to replace legacy systems due to regulatory or organizational mandates,” says Rod Simmons, vice president of product strategy at Omada, a provider of Identity Governance and Administration (IGA) software. “Many organizations also succumb to the ‘sunk cost’ fallacy. They’ve invested so much time, money and energy into legacy systems that are barely working. Not to mention they are spending so much time trying to make what they have work, that it feels impossible to consider how things could be better.”

5. Get new keys

When you modernize legacy tech, you can accidentally create a few more gaps in its security. One such security flaw can spring from reusing old security keys: either the keys themselves are already compromised, or you forget to destroy them when you obtain or make new keys and the old ones are compromised later.

Current encryption keys may be enough for now, given their enormous size and the inherent difficulty of cracking them. However, harvest-now, decrypt-later attackers are very patient and may be sitting on your system waiting for quantum computing to come online. If that’s a concern for your company, you may want to investigate the quantum-resistant keys that are already available.

While you’re tinkering around to make the system better, fit it with new security keys of some type, pay attention to whom you give access to these new keys, and destroy the old ones.

6. Be fickle about partners

The reality is that you’ll need more partners and sometimes different partners as dictated by the needs of your business over time. There is no discernable advantage to being overly loyal or sentimental about any given partner, no matter how familiar or how cozy the relationship in the past.

Also look for ways to replace or augment partners with automation, AI, or simplified functionality.

7. Decouple data

Legacy applications and platforms are notorious homes for data silos. This is a potentially fatal flaw for any effort to modernize or optimize, now and going forward. So, look hard at freeing up that data and breaking down silos everywhere you can.

“De-couple data stores that are used by many monolithic applications and consolidate behind enterprise accessible services such as APIs,” advises Mark Schlesinger, senior technical fellow at Broadridge Financial Solutions.
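
In practice, that often means putting a thin service in front of a table that several monolithic applications currently query directly. A hedged sketch with FastAPI; the customers table and its fields are hypothetical:

    # Assumes `pip install fastapi uvicorn`; run with `uvicorn facade:app`.
    # A thin, enterprise-accessible service in front of a shared data store,
    # so consuming applications stop querying the database directly.
    import sqlite3

    from fastapi import FastAPI, HTTPException

    app = FastAPI()

    @app.get("/customers/{customer_id}")
    def get_customer(customer_id: int) -> dict:
        conn = sqlite3.connect("legacy.db")  # stand-in for the legacy store
        row = conn.execute(
            "SELECT id, name, region FROM customers WHERE id = ?",
            (customer_id,),
        ).fetchone()
        conn.close()
        if row is None:
            raise HTTPException(status_code=404, detail="customer not found")
        return {"id": row[0], "name": row[1], "region": row[2]}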

Break all the black boxes, too.

“Mainframes are often called ‘black boxes’ of info for a reason: They’re webs of personalized code that have been managed by countless developer hands that have either exited their posts or retired altogether,” says Tim Jones, managing director of application modernization at Advanced, an international provider of application modernization services.

You may need a partner that is an expert in this type of black box cracking to help you get this done.

8. Double down on containers

Containers can make modernization easier, but they can also be used to stand up duplicate deployments quickly and efficiently.

“Use containers in lower-level public cloud environments to build products that will be deployed to production on the private cloud, as well as for products that will be deployed in production to the public cloud when time to market is critical and/or when future portability is expected to be necessary,” says Mark Schlesinger, senior technical fellow at Broadridge Financial Solutions.

9. Reach for new tools even to fix old tech

Most modernization projects these days are too big to do fast and yet must be completed quickly. Your set of familiar tools may not be enough to get you across the finish line in time. Don’t hesitate to reach for new tools to make the work quicker.

“By utilizing modern IT models, new approaches to IT like DevOps or site reliability engineering [SRE], and particularly new advancements in technology like AIOps, more IT teams are leveraging AI-driven intelligence and automation to make quick and accurate decisions, allowing them to deliver resiliency despite immense pressures,” says Dinesh Nirmal, general manager for IBM Automation.


You can read more about Modernizing Aging IT Systems here.

Teknita has the expert resources to support all your technology initiatives.
We are always happy to hear from you.

Click here to connect with our experts!

Digital Transformation and its Impact on IT

There are six areas of innovation that are already impacting, or are soon likely to impact, how business is managed and accomplished in our increasingly digital world:

Cybersecurity. 

As organizations deal with the risks and vulnerabilities posed by digital transformation, cybersecurity must continue to advance to keep pace with continually evolving and increasingly sophisticated cybercrime methods. Artificial intelligence (AI), machine learning (ML), and robotic process automation (RPA) can help detect malware and ransomware; advanced algorithms are enabling tighter security, especially when combined with automation tools.

Wireless 5G technology. 

5G is a prerequisite for our continued digital transformation, with global 5G smartphone subscriptions anticipated to reach 1B in 2022, and over 3B by 2026. 5G allows for 10-20 times higher data speeds, significantly greater device connectivity, and a greatly improved user experience with little to no lag times. 5G is also essential for supporting the continued advancement of augmented and virtual reality (AR/VR), which are destined to be game changers for how we work, live, and play.

Artificial intelligence. 

Already a component of the aforementioned cybersecurity, AI has a much broader footprint, with equally deep impact in other business areas; the market is projected to reach over $126B by 2025. At a high level, AI-based business tools allow organizations to forecast more precisely, in turn helping them improve operations, project sales and revenue more accurately, and recognize market trends much faster.

Natural language processing. 

Centered on interactions between computers and the human language, NLP is one of the most interesting and widely used AI technologies (think Siri and Alexa virtual assistants as well-known examples). However, NLP is still formative, and as advancements continue, companies will create machines with the ability to engage with humans in a way that will disrupt multiple aspects of our business and personal lives in ways many might currently deem unimaginable.

Metaverse. 

The virtual world where people can work, play, and interact with one another through immersive online experiences will continue to grow in popularity and sophistication. AR/VR gaming and digital marketplaces that include livestream shopping, virtual art galleries, and digital real estate (the latter alone surpassing $500M in 2021 and expected to more than double to over $1B in 2022) will continue to expand and integrate with social networks over the next several years. Again, AI and ML technologies will help power the metaverse forward.

Web3.

Something of a rebrand of blockchain technology, Web3 offers benefits to businesses and individuals, including complete ownership of data, the ability to allow users access to their data across multiple apps, and full data encryption allowing for enhanced security and greater transparency. Decentralized systems enabled by Web3 are also benefiting creators and artists who can leverage non-fungible tokens (NFTs) to market their products for fair earnings, shared ownership, and autonomy. As the “creator” economy grows along with Web3, more people will be enabled to build and create.

As digital transformation continues to disrupt how we do business, organizations must stay constantly aware of new technologies in order to learn what to adopt and when, as it makes the most sense for their organizational roadmap and vision.

Of these technologies, advanced cybersecurity solutions are already a “must have” to ensure full customer data protection. Other technologies, such as 5G and AI/ML, are also becoming more widely available and may soon join the category of innovations that organizations must integrate in order to thrive from a competitive and customer satisfaction perspective.


You can read more about Digital Transformation here.

Teknita has the expert resources to support all your technology initiatives.
We are always happy to hear from you.

Click here to connect with our experts!

Two-Factor Authentication by 2023

To improve software security, organizations must mandate two-factor authentication sooner rather than later, as a single password may be the only thing protecting your data.

GitHub took a step toward improving software security, announcing that contributors to all code repositories must use two-factor authentication (2FA) by the end of 2023. Employing 2FA increases account security, but developers, software vendors, and customers should consider what they can do now to strengthen their software, both for their own benefit and that of the rest of the software ecosystem. To start, you don’t have to wait to adopt some form of 2FA, which typically uses a combination of a password with a security token or biometric feature like a fingerprint or face scan. 2FA isn’t perfect, but it is harder to compromise than a single password and it has proven effective at reducing credential compromises and other attacks.
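
To see how small the second factor can be, here is a time-based one-time password (TOTP) round trip, the same scheme behind most authenticator apps, using the pyotp library. The secret handling is simplified for illustration:

    # Assumes `pip install pyotp`. In production, the secret is generated
    # once at enrollment, stored server-side, and never logged.
    import pyotp

    secret = pyotp.random_base32()  # shared at 2FA enrollment, usually via QR code
    totp = pyotp.TOTP(secret)

    code = totp.now()  # the six digits the authenticator app displays
    print("current code:", code)

    # At login, the server verifies the submitted code alongside the password.
    assert totp.verify(code)  # valid within the 30-second time window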

Effective steps organizations can focus on include:

Software composition analysis.

SCA is an automated process of evaluating the security, license compliance and code quality of open-source software. With the increased use of cloud-native applications and DevOps/DevSecOps practices, trying to track open-source code manually is no longer practical. SCA’s automated analysis is quickly becoming essential.

Software Bill of Materials (SBOM).

SBOM is a machine-readable inventory of software components and dependencies, including information about those components and their hierarchical relationships. An SBOM can reduce security risk while also cutting costs and easing compliance.

SBOMs can also help in avoiding potentially harmful practices, such as auto-merging code from open-source repositories, and they allow you to be as discerning as possible when going between versions in open-source repos.
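
To give a flavor of what “machine-readable” means here, the snippet below assembles a heavily trimmed, CycloneDX-style inventory by hand; in practice an SBOM is generated by tooling rather than written manually:

    import json

    # A heavily trimmed, CycloneDX-style inventory: each component records
    # what it is and its version, and dependencies capture the hierarchy.
    sbom = {
        "bomFormat": "CycloneDX",
        "specVersion": "1.4",
        "components": [
            {"type": "library", "name": "requests", "version": "2.28.1"},
            {"type": "library", "name": "urllib3", "version": "1.26.12"},
        ],
        "dependencies": [
            {"ref": "requests", "dependsOn": ["urllib3"]},
        ],
    }

    print(json.dumps(sbom, indent=2))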

Passwordless Technology.

Apple, Google and Microsoft announced plans to build support for passwordless authentication across all of the platforms they control. It might be hard to imagine a world without passwords, but it already exists on billions of devices that users unlock with fingerprint or face verification, or the use of a device PIN, all of which are simpler and more secure than passwords or technologies such as one-time passcodes sent via SMS. Passwordless authentication can include physical security keys, specialized apps, emailed magic links and biometrics.

You might not think that passwords are your problem, but they are, especially when a single password is the only thing standing between an attacker and your data. Encouraging 2FA for GitHub contributors is undoubtedly a positive step, but mandating it should happen sooner rather than later.


You can read more about Two-Factor Authentication here.

Teknita has the expert resources to support all your technology initiatives.
We are always happy to hear from you.

Click here to connect with our experts!

Is Hyperautomation a Realistic Goal for 2022

Hyperautomation is essentially an extension of digital transformation (DX) with an increased focus on AI, machine learning, and fully automated processes. Hyperautomation creates a framework in which business functions can operate 24/7, and it further reduces human intervention, which can translate into significant cost savings. For many organizations, the thought of using advanced technologies to automate processes is obviously attractive. However, the path to achieving this goal is rife with potential pitfalls.

A proper approach to hyperautomation is to construct a robust plan at both the macro and micro levels. While the end goal of hyperautomation is to automate all business processes across the board using AI-powered, data-driven decision-making, actual implementations should be conducted on a case-by-case basis, and only where processes have been implemented successfully and allow for proper levels of scalability and flexibility.

Business leaders and architects must first build a high-level map of how their organization is expected to operate both now and in the future. This is required so that the necessary levels of elasticity can be built into hyperautomated processes. Those who expect to pivot their business significantly in the next few years, for example, will want to be very cautious not to lock automated systems or processes into today’s business process flows.

The risk of jumping into hyperautomation projects without properly vetting macro- and micro-level business opportunities is significant. If existing manual processes are not flexible or efficient, simply automating them with AI/machine learning will at best dilute the benefits that hyperautomation can deliver. In a worst-case scenario, it can hinder the business’s ability to grow or shift to more profitable ventures.

Also understand that hyperautomation is a fully data-driven approach, so the business must be prepared to collect, curate, and analyze very large and complex data sets. The necessary skills must be acquired either in-house or externally, and often both are required.

Despite the potential, hyperautomation is probably not a realistic goal for most organizations. While DX has come a long way, many businesses are still struggling to move manual processes into a new digital world; some have certainly succeeded, but they remain the minority. That said, IT leaders should not wait to start planning for hyperautomation. The process of building a macro- and micro-level road map can start today, regardless of where they stand from a DX perspective. Then, once DX has been accomplished, the path toward hyperautomation becomes far less risky.


You can read more about Hyperautomation here.

Teknita has the expert resources to support all your technology initiatives.
We are always happy to hear from you.

Click here to connect with our experts!