Channel: Dynamics 365 Business Central/NAV User Group

Blog Post: BC 16 update is ready for Business Central – are you?

Microsoft is rolling out notifications that it is ready to update tenants to BC 16 (2020 release wave 1). Are we ready?

1. Didn't you get notifications? Set up notification recipients in the Admin Center.
How can I set that? Follow the link below:
https://docs.microsoft.com/en-us/dynamics365/business-central/dev-itpro/administration/tenant-admin-center-notifications
The result should look like the screenshot below.

2. How can we check whether we are ready? Check the compatibility of PTEs with BC 16.
How can we check? One way: the version 16 preview is already available in the Admin Center, so we can create a sandbox environment on the 16 preview version and check our extensions' compatibility, and also test the new functionality and the PTE functionality. Another way: create a container using the Docker image below, which already gives you version 16 (see the PowerShell sketch at the end of this post).
mcr.microsoft.com/businesscentral/sandbox:us

3. What do we have to do once we are ready to update to BC 16? The Microsoft notification email comes with a lot of information. The header part contains the important dates, for example:
Updates will automatically start: On or after 4/30/2020 (UTC)
First day to apply update: 4/16/2020 (UTC)
Last day to apply update: 6/15/2020 (UTC)
Scheduled time: Between 12:00:00 AM and 11:59:59 PM (UTC)
The bottom part contains the versions, for example:
Your service identifier (Tenant): M365B085983-sandbox
Environment: Sandbox
Version before update: 15.4.41023.41345
Version after update: 16.0.11240.11946
NOTE: The good news is that we now have 60 days to select our update; it used to be 30 days. Microsoft gave us 14 days to set the update date, from 04/16 to 04/29. If we do not set the update before 04/29, Microsoft will update on or after 04/30 (maybe not all tenants on the same day).

How can I set the update date? If we go to the Admin Center and open the environment, we will see whether "Update scheduling available" is shown. If yes, it shows that the update will start on or after a date: in this case 04/30, as I hadn't set an update date yet. At the top, click Update and then Schedule Update. Here we can select the version and the date on which we want to update our environment. I prefer to first update sandbox environments that are a copy of production, test there, and then plan for production later. At the top, click Update and then Set Update Window: this lets us define the time of day during which the updates are to be rolled out.

Note: make sure that your environments are on 15.4 before updating to BC 16. If your environment is below 15.4, Microsoft will not schedule an update to BC 16.

Stay home and stay safe, and enjoy the update.
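As referenced in point 2, here is a minimal PowerShell sketch of the container route, using the NavContainerHelper/BcContainerHelper module. The cmdlet is called New-NavContainer or New-BcContainer depending on the module version you have; container name and credentials below are placeholders, and the exact parameters may vary per module version.

# Assumption: BcContainerHelper (formerly NavContainerHelper) is installed; run as administrator.
Install-Module BcContainerHelper -Force
$credential = Get-Credential   # local admin user to create inside the container

# Create a BC 16 sandbox container from the image mentioned above and test your PTEs in it
New-BcContainer -accept_eula `
    -containerName 'bc16test' `
    -imageName 'mcr.microsoft.com/businesscentral/sandbox:us' `
    -auth NavUserPassword `
    -credential $credential `
    -updateHosts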

Blog Post: Dynamics 365 Business Central: loading Configuration Packages from AL (part 2)

More than one year ago I wrote this post on how to import a Configuration Package (.rapidstart) file directly from AL code. That code uses the ImportRapidStartPackageStream method declared in the "Config. Package - Import" codeunit (a minimal sketch of that approach is shown at the end of this post). I remember suggesting some possible improvements to this codeunit in the past, like adding the possibility to import multiple files at once (maybe from a ZIP file) and to export a Configuration Package directly from code.

Checking for the same functionality some days ago in Dynamics 365 Business Central version 16, I discovered with a bit of surprise that:
- the "Config. Package - Import" codeunit is still the same (no new methods or implementations added);
- a new method called ImportPackageXMLFromStream was added to the "Config. XML Exchange" codeunit.

Using this second method is quite tricky: if you load the .rapidstart package directly into an InStream object and call the new method, you will get an XML error like "A call to System.Xml.XmlDocument.Load failed with this message: Root element is missing". The .rapidstart file is compressed, so you need to decompress it before calling the new method; the code that works uses ImportPackageXMLFromStream in the "Config. XML Exchange" codeunit on the decompressed stream. Please don't ask me why Microsoft added this new method here too.

However, there's still an open problem: there's no native method for exporting a Configuration Package from a SaaS environment directly from code. Codeunit "Config. XML Exchange" has the following methods:
- ExportPackage
- ExportPackageXML
- ExportPackageXMLDocument
but they're all for on-premise usage (they cannot be used on SaaS). I think that adding a new ExportPackageXMLFromStream method could be useful in many scenarios (ticket opened here), exactly like avoiding different methods in different codeunits that do pretty much the same thing.
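As a reminder of the part-1 approach (which still works in version 16), here is a minimal AL sketch of loading a user-selected .rapidstart file and handing it to the "Config. Package - Import" codeunit. Treat the exact signature of ImportRapidStartPackageStream as an assumption; it may differ between platform versions.

procedure ImportRapidStartPackage()
var
    ConfigPackage: Record "Config. Package";
    ConfigPackageImport: Codeunit "Config. Package - Import";
    PackageInStream: InStream;
    FileName: Text;
begin
    // Let the user pick a .rapidstart file and load it into a stream
    if not UploadIntoStream('Select package', '', 'RapidStart files (*.rapidstart)|*.rapidstart', FileName, PackageInStream) then
        exit;

    // The codeunit takes care of decompressing and importing the package
    ConfigPackageImport.ImportRapidStartPackageStream(PackageInStream, ConfigPackage);
end;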

Forum Post: Debugging BC on-premise with AAD

Hi guys, I hope one of you has tried something like this. I have an on-premise installation of BC 2019 fall release (15.1) with AAD authentication. I have also created a launch.json attach entry where I have set up the server URL, authentication = AAD, etc. When I select "Debug with publishing" it opens the browser and gives me this error:

Sign in
Sorry, but we're having trouble signing you in.
AADSTS90002: Tenant 'default' not found. This may happen if there are no active subscriptions for the tenant. Check to make sure you have the correct tenant ID. Check with your subscription administrator.

It's nothing like our regular AAD login, and the error looks more like it's actually trying to create a CLOUD login...
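For reference, an attach entry of the kind described would look roughly like this. The server URL, instance name and tenant GUID are placeholders, and the property names are taken from the AL extension's launch.json schema as I know it, so treat this as a sketch rather than a confirmed working configuration.

{
    "name": "Attach to on-prem BC (AAD)",
    "type": "al",
    "request": "attach",
    "server": "https://bc.mycompany.local",
    "serverInstance": "BC150",
    "authentication": "AAD",
    "tenant": "00000000-0000-0000-0000-000000000000",
    "breakOnNext": "WebClient"
}

Note that, per the reply further down in this thread, Microsoft states the AAD option in launch.json is only supported for cloud deployments, so for on-premise debugging you would typically fall back to Windows or UserPassword authentication.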

Forum Post: Job Queue Error Messages in NAV 2018

Hi Team, I wrote some C/AL code for updating my Approval Entry table for multiple-user approvals of documents in a workflow:

ImprestHeader.CALCFIELDS("Pending Approvals");
IF (ImprestHeader."Pending Approvals" = 1) AND
   (ImprestHeader."Request Status" = ImprestHeader."Request Status"::"Pending Approval")
THEN BEGIN
  Approval.RESET;
  Approval.SETRANGE("Table ID",52121500);
  Approval.SETRANGE("Document No.",ImprestHeader."No.");
  IF Approval.FINDFIRST THEN BEGIN
    REPEAT
      IF (Approval."Sequence No." = 2) AND (Approval.Status = Approval.Status::Open) THEN
        Approval.VALIDATE(Status,Approval.Status::Canceled);
      Approval.MODIFY;
      COMMIT;
    UNTIL Approval.NEXT = 0;
  END;
END;

I also followed this link: https://www.myerrorsandmysolutions.com/you-do-not-have-the-following-permissions-on-tabledata/

It was working fine on the test server, but on the live server the job queue reports fail with these error messages:

The following SQL error was unexpected: Time-out occurred while waiting for buffer latch type 2 for page (1:266641582), database ID 5.
The following SQL error was unexpected: This SqlTransaction has completed; it is no longer usable.
You do not have permission on Approval Entry Table: Modify. (The users have Full License as license type and SUPER as permission set on the user card.)

What can I do to solve these error messages? Thanks

Forum Post: RE: Job Queue Error Messages in NAV 2018

Solved. The client license didn't permit me to modify the Approval Entry table from a report, only from a codeunit. Thank you.
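For illustration, the refactoring described above – moving the Approval Entry modification out of the report and into a codeunit called from it – could be sketched in C/AL roughly like this. The codeunit name and number are hypothetical; 52121500 is the table ID used in the original post.

// Codeunit 50100 "Cancel Second Approval" (hypothetical), TableNo = Imprest Header
// OnRun(VAR Rec : Record "Imprest Header")
Approval.RESET;
Approval.SETRANGE("Table ID",52121500);
Approval.SETRANGE("Document No.",Rec."No.");
IF Approval.FINDSET(TRUE) THEN
  REPEAT
    IF (Approval."Sequence No." = 2) AND (Approval.Status = Approval.Status::Open) THEN BEGIN
      Approval.VALIDATE(Status,Approval.Status::Canceled);
      Approval.MODIFY(TRUE);
    END;
  UNTIL Approval.NEXT = 0;

// In the report, replace the inline modification with a call such as:
// CODEUNIT.RUN(CODEUNIT::"Cancel Second Approval",ImprestHeader);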

Forum Post: RE: Account Schedule Weekly Column Layout

Hi Coltonk1, I hope the layout below works for you. Navigate to Departments/Financial Management/General Ledger/Analysis & Reporting/Account Schedules. Click "Edit Column Layout Setup" in the ribbon and configure it as below; you will then get the report broken down by week.

Column No. | Column Header | Column Type | Ledger Entry Type | Amount Type | Comparison Date Formula
1          | Week 1        | Net Change  | Entries           | Net Amount  | -3W
2          | Week 2        | Net Change  | Entries           | Net Amount  | -2W
3          | Week 3        | Net Change  | Entries           | Net Amount  | -1W
4          | Week 4        | Net Change  | Entries           | Net Amount  | CW

Regards, Genie_Cetas

Forum Post: RE: Debugging BC on-premise with AAD

In the meantime I have been in contact with Microsoft, and the answer is that the AAD option in the launch.json file is only supported for cloud deployments. So please support this at BC Ideas:

Forum Post: Docker has status "Exited" - How to start again ?

Hello, I've been working with BC Docker containers for more than a year, and now my container has status "Exited" (I'm on Windows 10 with Docker Desktop 2.1.0.4 installed). If I do a Restart in Docker Desktop the status stays on "Exited". How can I get it back to the "Running" status? Thanks

Forum Post: Posting Sales Journal through CodeUnit

I want to know how a Sales Journal line can be posted using C/AL code. Any help would be highly appreciated.
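For what it's worth, a minimal C/AL sketch of the usual pattern: the Sales Journal is just Gen. Journal Line records (table 81) in a journal template of type Sales, and the lines are posted by running one of the general journal posting codeunits. The template and batch names below are placeholders.

// Post every line in a given sales journal batch (template/batch names are placeholders)
GenJnlLine.RESET;
GenJnlLine.SETRANGE("Journal Template Name",'SALES');
GenJnlLine.SETRANGE("Journal Batch Name",'DEFAULT');
IF GenJnlLine.FINDFIRST THEN
  CODEUNIT.RUN(CODEUNIT::"Gen. Jnl.-Post Batch",GenJnlLine);

// Alternatively, codeunit "Gen. Jnl.-Post Line" posts a single, already validated journal line.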

Forum Post: Posting Sales Journal

When we post a sales journal line using C/AL code, will all these tables be modified?

Forum Post: Inserting Data through XMLPORT in Customer Ledger Entries

Can we directly insert data into the Cust. Ledger Entry table through an XMLport, without posting journal lines?

Forum Post: RE: Docker has status "Exited" - How to start again ?

Have you tried to create the container again?
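For completeness, these are the usual command-line steps to check why a container exited and to try starting it again; the container name 'bcsandbox' is a placeholder.

docker ps -a                 # list all containers, including exited ones
docker logs bcsandbox        # inspect the last output to see why it exited
docker start bcsandbox       # try to start it again

# If it keeps exiting (for example after a Windows or Docker Desktop update), recreating
# the container, as suggested above, is often the quickest fix.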

Blog Post: Dynamics 365 Business Central: using AL rulesets to customize code analysis

I think that many of you are familiar with this topic, but today I received a question related to AL code analysis and I think a response here can be helpful too. Every Dynamics 365 Business Central developer now knows that in AL you can activate code analyzers to inspect your code (you can find more info here and here). The AL code analyzers are the following:

- CodeCop is an analyzer that enforces the official AL coding guidelines.
- PerTenantExtensionCop is an analyzer that enforces rules that must be respected by extensions meant to be installed for individual tenants.
- AppSourceCop is an analyzer that enforces rules that must be respected by extensions meant to be published to Microsoft AppSource.
- UICop is an analyzer that enforces rules that must be respected by extensions meant to customize the Web Client.

Every company has its own practices for inspecting its codebase. There are companies that never use code analyzers, companies that activate some of them on every compilation, and companies that always activate all the code analyzers on every project. What's the best way of working? Someone might answer "activate all analyzers to have 100% clean code", but there are practical situations where this is not possible, or extremely time-consuming to activate immediately. A typical example that I see at many partners is the following: they convert a big C/AL solution to AL by using the txt2al tool. Some objects are refactored in AL, but some objects are a pure conversion from C/AL. During code conversion the txt2al tool creates fully working AL code that does not respect the formal AL standards (for example, the conversion produces FINDSET instead of FindSet(), and this is a warning for CodeCop). What happens now? The partner activates all the AL code analyzers and the solution goes from 0 errors to hundreds of warnings and errors.

In these situations, I think it's useful to know that you can customize the rules that come with the standard code analyzers by creating custom rulesets. As an example, consider the extension that I used in my previous post. I have 0 errors and the extension is fully working. But if I activate, for example, the CodeCop analyzer, a number of warnings appear, because my code does not respect all the official AL code style guidelines. What can you do now? You can essentially do one of the following:

1. Deactivate the CodeCop analyzer or ignore all the warnings
2. Fix some warnings and ignore the others
3. Create your custom rules

Point 3 is what we want to do now. We don't want to ignore the AL coding guidelines (because we want very clean code), but some rules are too crazy for us (and believe me, some of them are crazy), so we want to create our own custom rulesets. A ruleset is created by adding a file called <name>.ruleset.json (where <name> is the name of the ruleset) and using the truleset snippet. Here, as an example, I want to create some custom rules for my project where (see the first image) rules AA0074 and AA0214 are ignored, while AA0008 for me has a severity of error instead of warning. The ruleset can be defined as follows (use the trule snippet to create a rule). After creating the ruleset, you need to enable it in the settings by adding the al.ruleSetPath setting, pointing to the location of your ruleset file. What happens now to your AL extension?
Magically, the reported rules change, and in my example I now have 1 error and 2 warnings. I think that having custom rulesets per company is really useful, because you can guide your developers on how they should write code. Here we have created a ruleset for a specific project, but in a ruleset you can also include a company-wide ruleset and then customize rules per project: to do so, you add a declaration to your ruleset file that points to the company ruleset (a sketch follows below). Rulesets can help you write clean code without going too crazy, so remember that they exist.
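As an illustration, a custom ruleset of the kind described could look like this (the file name and justifications are placeholders; AA0074 and AA0214 are silenced, AA0008 is raised to an error):

{
    "name": "MyProject",
    "description": "Custom AL code analysis rules for this project",
    "rules": [
        { "id": "AA0074", "action": "None", "justification": "Ignored for this project" },
        { "id": "AA0214", "action": "None", "justification": "Ignored for this project" },
        { "id": "AA0008", "action": "Error", "justification": "Treated as an error in this project" }
    ]
}

In .vscode/settings.json you then point the compiler to it:

{
    "al.ruleSetPath": "./MyProject.ruleset.json"
}

And, if I remember the ruleset schema correctly, a company-wide ruleset can be pulled into the project ruleset via an includedRuleSets entry such as:

"includedRuleSets": [
    { "action": "Default", "path": "./company.ruleset.json" }
]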

Blog Post: Read-scale out with Azure SQL and Dynamics 365 Business Central

In the last month I had the chance to test the read scale-out feature of an Azure SQL Database with Dynamics 365 Business Central. Others have talked about this opportunity in the past, and I want to share my experience here. The Read Scale-Out feature permits you to have a read-only replica of your Azure SQL database and to use that replica for read-only queries instead of the main instance, without affecting its performance. This feature is enabled by default on the Premium, Business Critical and Hyperscale service tiers. Read Scale-Out cannot be enabled in the Basic, Standard, or General Purpose service tiers, and it is automatically disabled on Hyperscale databases configured with 0 replicas.

When you create an Azure SQL database with the Premium tier, you can see that read scale-out is selected by default. You can also disable this feature after database creation by selecting the database and then going to the Configure section. Using Azure PowerShell, you can check whether Read Scale-Out is enabled for a database with the following command:

Connect-AzAccount
Get-AzSqlDatabase -ResourceGroupName YourResourceGroup -ServerName YourServerName -DatabaseName YourDatabaseName | Format-List DatabaseName, Edition, ReadScale, ZoneRedundant

As you can see, I have it enabled on my database. You can also use Azure PowerShell to enable or disable the Read Scale-Out feature:

Set-AzSqlDatabase -ResourceGroupName YourResourceGroup -ServerName YourServerName -DatabaseName YourDatabaseName -ReadScale Disabled

To use the read-only replica of your database, you need to connect to the database with the ApplicationIntent property set to ReadOnly (just append ApplicationIntent=ReadOnly to your connection string). If you're using SQL Server Management Studio (SSMS), this can be done in the Additional Connection Parameters tab. When clicking Connect, you are redirected to the read-only replica and you can perform your queries there. What happens if you try to perform a write operation? Error! You're on a read-only replica. You can verify that you're connected to a read-only replica by executing the following T-SQL command:

SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS [Updateability];

If you insert new data or update existing data from the primary connection, the read-only replica is updated (after a small delay).

Why is this feature extremely interesting for Azure SQL in general, and for Dynamics 365 Business Central in particular? Because if you have lots of read-only workloads (like reports or data warehouse ETL activities that pull data) you can get a big performance benefit on your main instance, all without extra costs. Your primary database node is no longer affected by long-running read-only queries or processes, because they are not sent to the primary node anymore, and your production workloads (writing processes) benefit from that.

How can you use this feature with Dynamics 365 Business Central? Starting from Dynamics 365 Business Central 16, reports, queries and API pages have a new property called DataAccessIntent. This property lets you specify whether the data required by the object should be read from a read-only replica of the database or from the primary database. The ReadOnly value acts as a hint for the server to route the connection to a secondary (read-only) replica, if available.
When a workload is executed against the replica, insert/delete/modify operations aren't possible, and if any of these operations is executed against the replica, an exception is thrown at runtime. In Dynamics 365 Business Central SaaS, the read scale-out feature is automatically enabled by default, so it's highly recommended to use the DataAccessIntent property on your read-only queries. In on-premise scenarios (Azure SQL or SQL Server) you need to enable this feature on your own; read-only routing on SQL Server is only available from version 2016 onwards. For the on-premise scenario, you also need to enable the Enable SQL Read-Only Replica Support (EnableSqlReadOnlyReplicaSupport) setting in the service tier configuration.

You can also control the DataAccessIntent property for your objects directly via the user interface. To do that, search for the Database Access Intent List page. Here you can set the following values:

- Default: the object uses the predefined access intent.
- Allow Write: the object uses the primary database, allowing the user to modify data.
- Read Only: the object uses the database read-only replica, which means that the user can only view data (no insert/update/delete).

What's my experience so far? For large read-only reports, queries or external ETL processes that need to query data from your Dynamics 365 Business Central database, this is absolutely recommended (you will gain in performance and CPU load). If you have external applications that use direct SQL access to your on-premise database for read-only tasks, I suggest changing the database connection string by appending ApplicationIntent=ReadOnly. If you have API pages that expose entities only for reading data (GET), I suggest using this property in the API page definition. The only "problem" I've seen so far (if it's a problem for your scenario) is that data from the primary instance is not immediately replicated to the read-only replica; there's a small delay. If you have real-time reporting that could be affected by this, you need to keep this aspect in mind. I think this is extremely interesting for statistics, analytic reports, periodic reports that analyze data by period and so on, and I suggest giving it a chance.
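As a small illustration, a read-only AL query that routes its execution to the replica via the new property might look like this (object number and name are hypothetical):

query 50100 "Customer Sales (Read Only)"
{
    QueryType = Normal;
    // Hint the server to run this query against the read-only replica (BC 16 and later)
    DataAccessIntent = ReadOnly;

    elements
    {
        dataitem(Customer; Customer)
        {
            column(No; "No.") { }
            column(Name; Name) { }

            dataitem(CustLedgerEntry; "Cust. Ledger Entry")
            {
                DataItemLink = "Customer No." = Customer."No.";

                column(SalesLCY; "Sales (LCY)")
                {
                    Method = Sum;
                }
            }
        }
    }
}

The same property can be set on reports and API pages, or overridden at runtime from the Database Access Intent List page mentioned above.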

Blog Post: Business Central Spring 2019 Update (BC14) CU12 TearDown

Cumulative Update 12 for Microsoft Dynamics 365 Business Central April '19 on-premises (Application Build 14.13.42648, Platform Build 14.13.42627). This cumulative update is a rather small one, but it has some quirks I decided to blog about.

First some warnings. If you have a too-new version of Report Viewer, you will receive the error: "To upgrade reports, you must have Microsoft SQL Server 2016 Report Builder installed." I already had this installed, so I uninstalled it, downloaded a fresh copy from https://www.microsoft.com/en-us/download/details.aspx?id=53613 and reinstalled it. This did not help, so I installed Report Builder 3 from the BC14 install CD, which did the trick, and I was able to continue with the update.

Then some statistics. The change log text file is only 285 KB, and it contains a total of 80 changed objects marked with "NAVW114.13", and no new ones when we compare the import sheet to the previous CU. Most of the changes are located in pages and tables. MS has continued their standard way of releasing CUs, so this release contains translations for the new features that were released in the previous CU. This of course helps a bit, so there are no English captions, descriptions and tooltips happily mixed in with the localized texts.

New fields. Tables 21 Cust. Ledger Entry, 112 Sales Invoice Header, 274 Bank Acc. Reconciliation Line, 1208 Direct Debit Collection Entry, 1248 Ledger Entry Matching Buffer and 5992 Service Invoice Header now have a new field, Payment Reference [Code:50]. This is used to keep track of European Economic Area payments, and these fields are used in various codeunits where payment matching is processed. This probably means that the Finnish customization of field 32000000 Reference No. [Code:20] is going to become obsolete soon enough. Good riddance!

New functions. Pages 306 Report Selection - Sales, 524 Report Selection - Reminder and 5932 Report Selection - Service now have a new function, InitUsageFilter, to properly initialize report selections for reports.

Afterwords. Finally, due to the COVID-19 pandemic, MIBUSO NAV TechDays https://navtechdays.com is rescheduled to 2021. Bummer! I was already looking forward to seeing all fellow NAV/BC tech enthusiasts in Antwerp, but of course we have to put safety and health first. Stay safe and healthy! //Urpo

Forum Post: RE: Debugging

I also faced the same issue. Just cross-check the launch.json file, manually verify that the URL is working, then republish and try again. I did the same and fortunately it worked.

Forum Post: RE: The remote certificate is invalid according to the validation procedure

The certificate that the C/AL code wants to use might have expired.

Blog Post: Why to run standard tests?

Sunday morning, the sun softly touching me while I am sitting behind my laptop at the other end of our kitchen table. Time to write the post that has been on my mind for quite a while. A topic partly born out of amazement; an amazement triggered every once in a while when hearing fellow professionals say that it does not make sense to run the standard tests [1] on your solution. Our daily practice has shown differently for more than three years already. That's what I would like to share with you below.

Why we run standard tests
The MS test set, provided as part of the product, is a humongous collateral of tests. When it first became part of the product in NAV 2016 these were 16,000+ tests, and the number has been growing ever since with each minor and major release, altogether being more than 22,000 tests nowadays. This collateral covers all standard application areas, like G/L, Sales, and Purchase. The previous description implicitly holds two major reasons why we started to adopt the standard tests as part of our test automation: our solution extends a major part of the standard application, specifically the areas mentioned above, so almost unavoidably, running standard tests that cover these areas will hit our code; and these potentially thousands of tests hitting our code come for free, saving us a near to unimaginable amount of time [2].

As the saying goes, the proof of the pudding is in the eating, so we first ran the standard tests on our code exactly as I described in this post. Seeing the result of this first run, my first thought was one of dismay, as only a good 22 % of the tests ended successfully. On second thought, however, this showed to be a great result, as it meant that the gross amount of MS tests, i.e. 77 %, did hit (some of) our code; so these are, to say the least, tests that do matter to our solution. Getting these working would mean we would have a nice number of tests validating (part of) our code.

Overall relative result of our first test run, showing in orange the approx. 77 % failures.

What did we do
First of all we dreamed that we could get these tests running on our code in a couple of weeks: walk like Little Thumb with the Ogre's seven-league boots, as we say. Secondly we found a way to make big jumps with as little effort as possible to get them working. It's what I call a statistical approach: load the test results into Excel, use a number of pivot tables to list the most frequently occurring errors, analyze their cause and find a simple, generic solution. It turned out that in approx. 90 % of the failures the fix was to update the data in the database under test, meaning that we just needed to provide some extra shared fixture. In this post I talked about fixture, and more specifically shared fixture, and how we got that extended for the standard tests by hooking into the MS Initialize functions. After a first full week of work by one FTE the total success rate was lifted to 72 % (from the already mentioned good 22 %), a second week raised this to almost 80 %, and after an additional 4 weeks the meter eventually stopped at 90 %. An effort of 6 weeks of work by one person (spread over a couple of months) yielded approx. 12,500 tests that we could add to our test collateral.

Overall relative result of our test run after 6 weeks of work, showing in blue the approx. 90 % passing tests.

What did we gain
6 weeks of 1 FTE work for 12,500 tests might sound impressive given the time it would take to create 12,500 tests ourselves. But the sheer number of tests should not be a goal on its own. The goal is to get a useful collection of tests that can support us in our daily work, showing, by rerunning them on a regular basis, that our existing functionality is still doing what we expect it to do. So what did we gain by getting that massive number of tests at our disposal? Our statistical approach intrinsically disregarded any detailed knowledge of the standard tests. Knowing they did cover the various standard application areas, we assumed they would be meaningful to us. Seeing the vast part of the tests fail was a first indication that our assumption made sense. Having used these tests for our nightly test run over the last 3 years, we can only confirm that these 12,500 tests [3] have been of great value. They have saved us many times from after-go-live bugs, as these were already detected by our automated tests, not in the least due to their greater reach compared to manual testing. It also brought our development focus, including manual testing, to the next level, as we can rely on the nightly test run. Parallel to this move we leveraged our development practice with the introduction of Azure DevOps pipelines to automate builds and deployments. I guess it has never been a secret that I am a huge fan of this. All these efforts brought us very close to continuous delivery. It's actually more than very close, as we could update our production environment at any time if needed; for practical reasons we normally update in the weekend. Being one of the first end-user companies to go live at the start of 2018 on NAV 2018, coming from NAV 2016, was highly dependent on our test collateral; we couldn't have achieved this in such a short time without it. It again showed its value on the technical upgrade to BC 14 we have been investigating recently, where the test run lead time turned out to be significantly lower on BC 14 than on NAV 2018. [4]

Test run lead time registered for a number of runs on NAV 2018 (blue/grey) and BC 14 platform (orange/yellow).

Conclusion
Without a doubt, incorporating standard tests in our daily work has tremendously helped us:
- improve the quality of our solution
- leverage our development practice
- grow the development throughput
I reckon you can (now) understand the amazement I uttered above. Unless your solution is fully independent of the standard, I honestly do not get why you would not make use of the standard MS tests. You're selling yourself short by not doing this at all. Even in the context of a simple, small extension, like the LookupValue business case I used in my book, its code is touched by more than 3,000 tests on BC 14! BTW: learn more about how to get MS tests working on your solution from my book and by joining my online crash course Test Automation, of which a 5th run will start on May 25 (still some seats available).

Footnotes
[1] With standard tests I refer to the tests provided by MS as part of their product.
[2] A simple calculation I show during my workshops is the following: let's say, having achieved a certain degree of experience in writing automated tests, you would be able to create one automated test in 10 minutes (which is not that unimaginable), so 6 per hour and roughly 42 per 7-hour day. Getting to 20,000 tests would then take you, as one person, about 475 working days of effort, which in calendar time means almost 3 years to get that done.
[3] These 12,500 tests on NAV 2016 have grown to almost 14,000 tests on NAV 2018.
[4] While working on this project, all of a sudden the lead time increased dramatically, only to recover about 2 weeks later when we finally found out that a Windows Server update, rolled out just before the increase, contained a bug. Once fixed, the lead time was back to "normal". See how the graph notified us.

Test run lead time increase due to a bug in a Windows Server update. Note that the increase was more dramatic on BC 14 (orange/yellow) than on NAV 2018 (blue/grey). We never found out what caused the difference.

Comment on Case Study - Create a Webservice in Dynamics NAV to print an invoice from a website

Thanks for this helpful piece, but I would like help adapting it for NAV 2016. I am not getting the desired result from what I have tried so far.

Blog Post: How to Avoid a Mutually Assured Destruction While Implementing a New ERP System

Overview
While working with companies on ERP implementations (in our case, Microsoft Dynamics 365 Business Central), what I tend to notice more often than not is that companies know what they want to get out of an ERP, but not how to do it. The result is that managers create their own chaos, which makes people go in different directions. It also means that internal managers often sabotage their own ERP implementations. This article is part of a series about how internal managers can better implement new ERP systems. All roads lead to Rome, but some of them are steadier than others.

Common System for a Common Goal
It's not rocket science: when companies decide to switch to new ERP/accounting software to address specific problems they're facing, it's for a specific reason. Usually, it's because they have reached the limits of their current software, or it would be too expensive to do otherwise. While the new ERP offers a positive cost-benefit outcome, managers need to come up with common goals when implementing the new system to ensure a successful implementation, if they want to avoid a mutually assured destruction. After companies shop around and find the perfect system for them, the steps that follow typically are planning, configuring and then implementing the software. One modest but extremely important goal, in most cases, is simply to go live with the system. Even though this is a simple and vague goal, it allows the parties involved to work on the same objective, which is to move from the old, legacy software to the new one. It also means transferring crucial information to the new system in a way that fits your company's culture and ways of working. Here are the dos and don'ts when it comes to implementing a new ERP system.

Too Much Potential Is Like Too Little
There's one thing managers need to keep in mind while implementing a new ERP system: time is money. The potential offered by the new software may appear limitless; but while this perception of infinite improvement may prevail for a while, having too-high expectations of this change could have the very opposite effect. This is why a strategy is necessary. A new ERP could help your company with (but is not limited to):
- Automation
- Better information for customers
- Accuracy
- Process re-engineering
- Better analysis/reporting for management
While the ultimate goal of moving to new software is to make everything better, too much of a good thing may turn out to be bad. Think about it: more often than not, vendors promise the moon to their clients. When a manager watches the software's demo, he or she will think the possibilities are infinite. But often, the honeymoon phase doesn't last. Keep in mind that these people are salespeople, and your company needs a down-to-earth, strategic approach to implementing the software. Make sure you know from the beginning what your company wants to get from the system, so you know what to prioritize. This is when the software genius comes in, and the learning curve starts to go up. Shortly after the product is acquired, the manager will look at it and learn about the new features to figure out how they can benefit the company. Suddenly, so many problems the company had with the old system are going to be solved. Suddenly, a whole new world is going to open up to your company. Then, an infinite list of ideas will come out of the manager's expectations of this new system.
One thing to keep in mind when implementing a new ERP is that when working with developers, they are likely to say yes to most of your requests. Developers tend to see every request from a customer as a challenge, without necessarily thinking about budgets. The result: developers overpromise and managers run wild. To avoid this situation, the solution is to have a specific end goal in mind, with specific areas to improve. Most people do a cost-benefit analysis based on the cost and the improvements it would bring; however, the time to get it up and running is often overlooked. When you start a bunch of projects, especially during a period of high stress such as moving to a new system, you end up stretching your resources very thin. Too often, resources are stretched indefinitely and the end goal becomes so evasive that even the manager tends to forget it. This is why successful ERP implementations begin with specific, realistic goals that fit your company's culture and needs, and will save you time and money.

Conclusion
I've worked with many companies in the process of implementing new software. While I have complete confidence that it improves a company's efficiency and in the long run makes everybody's life better, my experience has shown that not having a full-length implementation strategy can cost time and energy; and since time is money, investing in time-saving strategies is the way to go. It's simple: focus on the main task at hand. Go one step at a time. While it may take slightly more time, the end result will be less confusion and better results. The last thing you want, when implementing a new system, is to breed fear and uncertainty. Fear and uncertainty lead to failure. Strategy, coherence, and preparedness lead to success.