by Teknita Team | Sep 6, 2022 | Uncategorized
1. Cloud Document Management Systems Enable Savings on Equipment Costs
The most obvious benefit of cloud document management systems is the same as that of other digital technologies: they remove the need for "physical" solutions.
Both traditional and electronic in-house document management systems also incur regular maintenance costs, which are much higher than those of cloud-based systems. Using a cloud document management system eliminates not only the initial cost of acquiring equipment but also the ongoing cost of maintaining it.
2. They Take Away Spatial Constraints
Space is often also a constraint for start-ups and small to medium-sized businesses (SMBs). Since funding for these businesses is limited and commercial property is expensive whether leased or bought, cutting space requirements can go a long way toward increasing savings. Traditional and in-house document management systems tend to take up a lot of space. That space not only draws on the company's funds but also results in cramped workspaces, which can hurt productivity.
3. They Offer Quicker Deployment
Most cloud document management systems are end-to-end, turnkey offerings. They only require the user to register the company, make the payment, and create the relevant user profiles. Most of these services are delivered through web apps, so users don't even need to install desktop software.
4. They Offer Better Security
Cloud document management systems improve document security against both deliberate and natural threats. Risks posed by external actors are countered with strong encryption algorithms and firewalls. Similarly, risks arising from natural disasters are managed with redundancy protocols: data is backed up regularly so that it is never lost for good.
5. They Provide Easy Scalability
Scalability can be a major concern for both start-ups and SMBs. The objective of a business is to grow, but if growth brings added costs that outpace the growing profits, further growth can stutter. Document management contributes directly here. With cloud document management systems, the current package can be upgraded for a small increase in the subscription rate, which is why cloud-based systems offer better scalability than traditional or electronic in-house systems.
6. They Improve Productivity
This is possibly the biggest benefit of cloud document management systems. Beyond saving the business money, they also improve productivity by saving time, in several ways.
First, they help save time by improving accessibility. They allow documents to be accessed from any location and at any time. More importantly, they increase the speed of collaboration between employees by making workflows more efficient and result oriented. This means that the employees’ time is better utilized and projects are turned around faster.
7. They are Environment-Friendly
Cloud document management systems are also more environmentally friendly than conventional systems. This is made possible by resource sharing: because the same equipment serves more than one client, economies of scale reduce the carbon footprint of the service provider and, by extension, of its clients.
8. They Reduce IT Support Dependency
Cloud document management systems free up their clients' IT support teams. Since third-party service providers maintain their own equipment, in-house IT teams don't need to get involved in software updates, hardware maintenance, network management, licensing requirements, user monitoring, or even backup creation.
This can either allow IT teams to be scaled down to match the reduced support load or free them to focus on improving the efficiency of the rest of the company's technology.
You can read more about Benefits of Cloud Document Management Systems here.
Teknita has the expert resources to support all your technology initiatives.
We are always happy to hear from you.
Click here to connect with our experts!
by Teknita Team | Aug 25, 2022 | Uncategorized
Custom List
The SharePoint Custom List has been around forever, but it has changed a lot over the years: lists are now modern and easy to use. In addition, you can set unique permissions on individual rows within a list, giving various contributors the ability to edit their own entries. Moreover, you can also format the list, giving your knowledge base a modern look. At a minimum, you can easily create columns (metadata), categorizing the entries any way you want.
Pages with metadata
The idea behind the second option is that instead of info being stored in a row within a list, each entry gets its own SharePoint page. This, of course, gives you lots of flexibility in terms of content (text, images, videos, etc.), and you get far more real estate to store the information. You can utilize some additional features available within the Site Pages library (that is where all the SharePoint pages are stored) to spice up the knowledge base built with SharePoint pages:
- Ability to create a page template, so you can standardize the look and feel of every wiki article
- Ability to create custom metadata on the Site Pages library and display it on the article itself
All the pages built on a site are searchable by the mighty SharePoint search, so you can use keyword search and metadata filtering if you opt for metadata.
Page with Collapsible Sections
The third option, which SharePoint gained recently, is a cross between the previous two. If you like the flexibility of a list, with its ability to group information by question and answer, yet also want a page's capacity to hold text, images, and other web parts, you might want to check out collapsible sections.
Viva Topics
Finally, SharePoint has an option to create a knowledge base based on AI as well as manual input. This is possible thanks to the newly released Viva Topics, a module within the Viva platform. It is a contextual option: topics might appear during a Teams conversation, in SharePoint search results, in news posts, and so on.
You can read more about SharePoint here.
by Teknita Team | Aug 24, 2022 | Uncategorized
Choosing between Visual Studio Code and Visual Studio is not so simple. While Visual Studio Code is highly configurable, Visual Studio is highly complete. The choice may depend as much on your work style as on the language support and features you need.
Let’s take a look at the capabilities of these two development tools.
Visual Studio Code
Visual Studio Code is a lightweight but powerful source code editor that runs on your desktop and is available for Windows, macOS, and Linux. It comes with built-in support for JavaScript, TypeScript, and Node.js and has a rich ecosystem of extensions for other languages (such as C++, C#, Java, Python, PHP, and Go) and runtimes (such as .NET and Unity).
Aside from the whole idea of being lightweight and starting quickly, VS Code has IntelliSense code completion for variables, methods, and imported modules; graphical debugging; linting, multi-cursor editing, parameter hints, and other powerful editing features; snazzy code navigation and refactoring; and built-in source code control including Git support. Much of this was adapted from Visual Studio technology.
VS Code proper is built using the Electron shell, Node.js, TypeScript, and the Language Server protocol, and is updated on a monthly basis. The extensions are updated as often as needed. The richness of support varies across the different programming languages and their extensions, ranging from simple syntax highlighting and bracket matching to debugging and refactoring.
The code in the VS Code repository is open source under the MIT License. The VS Code product itself ships under a standard Microsoft product license, as it has a small percentage of Microsoft-specific customizations. It’s free despite the commercial license.
Visual Studio
Visual Studio (current version Visual Studio 2022, which is 64-bit) is Microsoft’s premier IDE for Windows and macOS. With Visual Studio, you can develop, analyze, debug, test, collaborate on, and deploy your software.
On Windows, Visual Studio 2022 has 17 workloads, which are consistent tool and component installation bundles for different development targets. Workloads are an important improvement to the Visual Studio installation process, because a full download and installation of Visual Studio 2022 can easily take hours and fill a disk, especially an SSD.
Visual Studio 2022 comes in three SKUs: Community (free, not supported for enterprise use), Professional ($1,199 first year/$799 renewal), and Enterprise ($5,999 first year/$2,569 renewal). The Enterprise Edition has features for architects, advanced debugging, and testing that the other two SKUs lack.
Visual Studio or Visual Studio Code
If your development style is test-driven, Visual Studio will work right out of the box. On the other hand, there are more than 15 test-driven development (TDD) extensions for VS Code supporting Node.js, Go, .NET, and PHP. Similarly, Visual Studio does a good job working with databases, especially Microsoft SQL Server and its relatives, but VS Code has lots of database extensions. Visual Studio has great refactoring support, but Visual Studio Code implements the basic refactoring operations for half a dozen languages.
There are a few clear-cut cases that favor one IDE over the other. For instance, if you are a software architect and you have access to Visual Studio Enterprise, you’ll want to use that for the architecture diagrams. If you need to collaborate with team members on development or debugging, then Visual Studio is the better choice. If you need to do serious code analysis or performance profiling, or debug from a snapshot, then Visual Studio Enterprise will help you.
VS Code tends to be popular in the data science community. Nevertheless, Visual Studio has a data science workload that offers many features.
Visual Studio doesn’t run on Linux; VS Code does. On the other hand, Visual Studio for Windows has a Linux/C++ workload and Azure support.
For daily bread-and-butter develop/test/debug cycles in the programming languages supported in both Visual Studio and VS Code, which tool you choose really does boil down to personal preference.
You can read more about Visual Studio and Visual Studio Code here.
by Teknita Team | Aug 22, 2022 | Process Automation
OLAP (online analytical processing) is software for performing multidimensional analysis at high speeds on large volumes of data from a data warehouse, data mart, or some other unified, centralized data store. High-speed analysis can be accomplished by extracting the relational data into a multidimensional format called an OLAP cube; by loading the data to be analyzed into memory; by storing the data in columnar order; and/or by using many CPUs in parallel (i.e., massively parallel processing, or MPP) to perform the analysis.
OLAP CUBE
The core of most OLAP systems, the OLAP cube is an array-based multidimensional database that makes it possible to process and analyze multiple data dimensions much more quickly and efficiently than a traditional relational database. Analysis can be performed quickly, without a lot of SQL JOINs and UNIONS. OLAP cubes revolutionized business intelligence (BI) systems. Before OLAP cubes, business analysts would submit queries at the end of the day and then go home, hoping to have answers the next day. After OLAP cubes, the data engineers would run the jobs to create cubes overnight, so that the analysts could run interactive queries against them in the morning.
The OLAP cube extends the single table with additional layers, each adding additional dimensions—usually the next level in the “concept hierarchy” of the dimension. For example, the top layer of the cube might organize sales by region; additional layers could be country, state/province, city and even specific store.
In theory, a cube can contain an infinite number of layers. (An OLAP cube representing more than three dimensions is sometimes called a hypercube.) And smaller cubes can exist within layers—for example, each store layer could contain cubes arranging sales by salesperson and product. In practice, data analysts will create OLAP cubes containing just the layers they need, for optimal analysis and performance.
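As a rough sketch of the cube idea, independent of any particular OLAP engine, a cube can be modeled in plain Python as a mapping from dimension coordinates to an aggregated measure. The dimension names and figures below are invented for illustration:

```python
from collections import defaultdict

# Flat "relational" sales rows: (region, country, city, quarter, amount)
rows = [
    ("EMEA", "UK", "London", "Q1", 100),
    ("EMEA", "UK", "London", "Q2", 120),
    ("EMEA", "DE", "Berlin", "Q1", 80),
    ("AMER", "US", "Boston", "Q1", 150),
    ("AMER", "US", "Boston", "Q2", 90),
]

# Build the cube: aggregate the measure at every dimension coordinate,
# so later queries never need to re-join or re-scan the source table.
cube = defaultdict(int)
for region, country, city, quarter, amount in rows:
    cube[(region, country, city, quarter)] += amount
```

A real OLAP engine would also precompute the higher levels of each concept hierarchy (country, region totals), but the lookup-by-coordinates structure is the same.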
OLAP cubes enable four basic types of multidimensional data analysis:
Drill-down
The drill-down operation converts less-detailed data into more-detailed data through one of two methods: moving down in the concept hierarchy or adding a new dimension to the cube. For example, if you are viewing sales data for an organization's calendar or fiscal quarter, you can drill down to see sales for each month, moving down the concept hierarchy of the "time" dimension.
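The quarter-to-month example can be sketched in plain Python; the figures and the quarter/month mapping are invented for illustration:

```python
# Sales at the "month" level (the more detailed level of the time hierarchy)
monthly = {"Jan": 40, "Feb": 30, "Mar": 30, "Apr": 50, "May": 35, "Jun": 35}
months_in = {"Q1": ["Jan", "Feb", "Mar"], "Q2": ["Apr", "May", "Jun"]}

# Quarterly view: the less detailed level, derived by aggregation
quarterly = {q: sum(monthly[m] for m in ms) for q, ms in months_in.items()}

def drill_down(quarter):
    """Expand one quarter into its monthly breakdown."""
    return {m: monthly[m] for m in months_in[quarter]}
```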
Roll up
Roll up is the opposite of the drill-down function—it aggregates data on an OLAP cube by moving up in the concept hierarchy or by reducing the number of dimensions. For example, you could move up in the concept hierarchy of the “location” dimension by viewing each country’s data, rather than each city.
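The city-to-country example can likewise be sketched with invented figures:

```python
from collections import defaultdict

# Sales at the "city" level of the location hierarchy
city_sales = {
    ("Canada", "Toronto"): 120,
    ("Canada", "Vancouver"): 80,
    ("US", "Boston"): 150,
    ("US", "Seattle"): 100,
}

# Roll up: move up the concept hierarchy by aggregating cities per country
country_sales = defaultdict(int)
for (country, _city), amount in city_sales.items():
    country_sales[country] += amount
```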
Slice and dice
The slice operation creates a sub-cube by fixing a single value along one dimension of the main OLAP cube. For example, you can perform a slice by selecting all data for the organization's first fiscal or calendar quarter (time dimension).
The dice operation isolates a sub-cube by selecting several dimensions within the main OLAP cube. For example, you could perform a dice operation by highlighting all data by an organization’s calendar or fiscal quarters (time dimension) and within the U.S. and Canada (location dimension).
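Both operations reduce to filtering the cube's coordinates; here is a sketch on a small two-dimensional cube with invented figures:

```python
# Cube keyed by (quarter, country), measure is sales
cube = {
    ("Q1", "US"): 150, ("Q1", "Canada"): 120, ("Q1", "UK"): 90,
    ("Q2", "US"): 90,  ("Q2", "Canada"): 80,  ("Q2", "UK"): 70,
}

# Slice: fix the time dimension to a single value (first quarter only)
q1_slice = {k: v for k, v in cube.items() if k[0] == "Q1"}

# Dice: restrict several dimensions at once (both quarters, US and Canada)
na_dice = {
    k: v for k, v in cube.items()
    if k[0] in ("Q1", "Q2") and k[1] in ("US", "Canada")
}
```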
Pivot
The pivot function rotates the current cube view to display a new representation of the data, enabling dynamic multidimensional views. The OLAP pivot is comparable to the pivot table feature in spreadsheet software such as Microsoft Excel, but while pivot tables in Excel can be challenging to set up, OLAP pivots are easier to use (less expertise is required) and offer faster response times and query performance.
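Conceptually, a pivot just swaps which dimension forms the rows and which forms the columns; a minimal sketch with invented figures:

```python
from collections import defaultdict

# Current view: rows are quarters, columns are countries
view = {
    "Q1": {"US": 150, "Canada": 120},
    "Q2": {"US": 90, "Canada": 80},
}

# Pivot: rotate so rows are countries and columns are quarters
pivoted = defaultdict(dict)
for quarter, by_country in view.items():
    for country, amount in by_country.items():
        pivoted[country][quarter] = amount
```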
You can read more about OLAP here.
by Teknita Team | Aug 19, 2022 | Uncategorized
James Dixon described the data lake this way:
If you think of a data mart as a store of bottled water—cleansed and packaged and structured for easy consumption—the data lake is a large body of water in a more natural state. The contents of the data lake stream in from a source to fill the lake, and various users of the lake can come to examine, dive in, or take samples.
A data lake is essentially a single data repository that holds all your data until it is ready for analysis, or possibly only the data that doesn’t fit into your data warehouse. Typically, a data lake stores data in its native file format, but the data may be transformed to another format to make analysis more efficient. The goal of having a data lake is to extract business or other analytic value from the data.
Data lakes can host binary data, such as images and video, unstructured data, such as PDF documents, and semi-structured data, such as CSV and JSON files, as well as structured data, typically from relational databases. Structured data is more useful for analysis, but semi-structured data can easily be imported into a structured form. Unstructured data can often be converted to structured data using intelligent automation.
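As a small illustration of that last step, here is one way semi-structured JSON records might be projected into a tabular, structured form using only Python's standard library (the field names and values are invented):

```python
import csv
import io
import json

# Semi-structured JSON records, as they might land in a data lake
raw = '[{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]'
records = json.loads(raw)

# Project the records into a fixed, tabular shape (CSV)
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "name"])
writer.writeheader()
writer.writerows(records)
table = buf.getvalue()
```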
Data lake vs data warehouse
The major differences between data lakes and data warehouses:
- Data sources: Typical sources of data for data lakes include log files, click-stream data, social media posts, and data from internet-connected devices. Data warehouses typically store data extracted from transactional databases, line-of-business applications, and operational databases for analysis.
- Schema strategy: The database schema for a data lake is usually applied at analysis time, which is called schema-on-read. The database schema for an enterprise data warehouse is usually designed prior to the creation of the data store and applied to the data as it is imported. This is called schema-on-write.
- Storage infrastructure: Data warehouses often have significant amounts of expensive RAM and SSD disks in order to provide query results quickly. Data lakes often use cheap spinning disks on clusters of commodity computers. Both data warehouses and data lakes use massively parallel processing (MPP) to speed up SQL queries.
- Raw vs curated data: The data in a data warehouse is supposed to be curated to the point where the data warehouse can be treated as the “single source of truth” for an organization. Data in a data lake may or may not be curated: data lakes typically start with raw data, which can later be filtered and transformed for analysis.
- Who uses it: Data warehouse users are usually business analysts. Data lake users are more often data scientists or data engineers, at least initially. Business analysts get access to the data once it has been curated.
- Type of analytics: Typical analysis for data warehouses includes business intelligence, batch reporting, and visualizations. For data lakes, typical analysis includes machine learning, predictive analytics, data discovery, and data profiling.
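The schema-on-read strategy from the list above can be sketched in a few lines of plain Python; the field names and records are invented for illustration:

```python
import json

# Raw lines as stored in the lake: no schema was enforced at write time,
# so fields may be strings and extra fields may appear.
raw_lines = [
    '{"user": "ada", "clicks": "3"}',
    '{"user": "grace", "clicks": "5", "extra": "ignored"}',
]

def read_with_schema(line):
    """Apply the schema at read time: pick the needed fields, coerce types."""
    rec = json.loads(line)
    return {"user": str(rec["user"]), "clicks": int(rec["clicks"])}

events = [read_with_schema(line) for line in raw_lines]
```

A schema-on-write system would instead reject or transform the second record at import time; here the cleanup cost is deferred until someone actually reads the data.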
You can read more about Data Lake here.