July 20, 2012
We often hear from business executives about their concerns with duplicate systems and duplicate data sets. We also hear the desire to create or improve “a grand all-encompassing data warehouse” so executives can better understand customer trends as well as all sorts of business informatics. Mostly, there is a desire for one easy-to-use consolidated database: a single database of truth for all customers, contacts, and related information, accessible across all devices (phone, tablet, laptop).
This post walks through four steps toward that goal:
- Understand why data consolidation is important
- Recognize reasons why data silos occur
- Catalog all the data sets
- Form a strategic plan on how CRM will be leveraged
STEP 1 - Understand Why Data Consolidation is Important
The first step is to understand the value of good customer data. Many executives already agree that consolidating customer data is critical in business today, but some are hesitant to invest in the endeavor or are unsure where to start. There are many advantages to data consolidation, but in a nutshell, businesses are more effective, more competitive, and have greater insight when they can see one shared customer record with a full 360-degree view. This view could include all customer activities, purchases, visits to your website and where they went, accounting information from the ERP, real-time data from data aggregators, customer complaints and their resolutions, and yes, the customer’s correct email address. That is at the record level. Good data then rolls up for quality analytics: reporting on your key performance indicators, customer buying trends, and even predictive buying gets a whole lot easier once everything is in one place. Keep in mind that in your current state you probably have most of this data already; it is just spread out across too many duplicate databases and Excel files to be usable.
STEP 2 - Recognize Reasons Why Data Silos Occur
The next step is to stop data silo creation. There is usually a central system in place, but users extract the data they want to an Excel file and continue to update that file outside the central system. Why are users compelled to create Access databases and Excel files? There can be many reasons, but usually it is because the central system is hard to use and it is simply easier to work in Excel or their own Outlook. So how do you stop users from creating their own data sets? Improving the user interface is one way, and that is where Microsoft Dynamics CRM can help. Although Microsoft CRM uses a powerful Microsoft SQL Server database to handle big data, it is the user interface of Microsoft CRM that will often help reduce data silo creation. Microsoft CRM can be designed with the user first in mind: a clean, purpose-driven interface. It also helps that Microsoft Dynamics CRM can be accessed from within the very tools that frequently drive silo creation in the first place, Excel and Outlook. All of CRM can be accessed from Outlook, and it takes one click in CRM to bring data into Excel.
STEP 3 - Identify and Catalog all the Data Sets
Next, start to identify all your data: data you have and even data you don’t. Catalog all the in-house data, such as main central systems, the many rogue Access databases and Excel files, website databases, SharePoint lists, and so on. Focus especially on the data that is being managed and updated away from a central database. Think also about the data you may not have now but could get from an online subscription to a data aggregator, such as InsideView to sync real-time business and social data, or Trillium to validate and clean your data for total data quality.
STEP 4 - Form a Strategic Plan on How CRM Will Be Leveraged
The last and most important step is to form a long-term strategic plan. This is where some companies may need assistance. Without going into too much detail, the plan should identify what data will be migrated and what will be integrated.
- Migration: Migrated data is moved permanently to live in CRM. This isn’t just the rogue Excel files; it could mean retiring completely separate business applications, moving the data along with the functionality and business process into CRM, using the xRM approach to reproduce functionality where logical.
- Integration: What data gets integrated in and out of CRM? Is the integration one-way or two-way (bidirectional)? What outside data sources and services will be utilized? What data integration tools, such as Scribe, may be needed?
The strategic plan may also include new reporting needs, how data will be displayed across devices, and what data users can and cannot access based on their security roles. As you can see, a well-thought-out plan divided into logical phases provides the long-term blueprint and framework for continued data success.
If you understand the value of one consolidated database, know why users create data silos, have cataloged all your data sets, and have formed and documented a well-thought-out plan that leverages Microsoft Dynamics CRM, then you are well on your way to improved data consolidation.
May 29, 2012
In many source systems (such as salesforce.com), the data from multi-select pick-lists is stored in a memo field as a long list of semicolon-delimited values:
ID  Value
1   Option1;Option2;Option4
2   Option1;Option3
When upgrading to Microsoft Dynamics CRM, you may choose to convert those individual fields into N:N (many to many) relationships, but first you’ll need to de-concatenate those values into multiple rows along with the id of the source record.
ID  Value
1   Option1
1   Option2
1   Option4
2   Option1
2   Option3
The following example demonstrates converting a semicolon-delimited text field on the opportunity record into a list of rows that can then be matched to a related entity, identifying any missing many-to-many relationships that need to be created.
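A minimal T-SQL sketch of the de-concatenation step, using a recursive CTE; the table name (SourceData) and its columns (ID, Value) are placeholders for your actual source, and the sketch assumes non-empty values:

```sql
-- Split a semicolon-delimited Value column into one row per option,
-- keeping the source record's ID with each option.
;WITH Split AS (
    SELECT ID,
           CAST(LEFT(Value, CHARINDEX(';', Value + ';') - 1) AS VARCHAR(4000)) AS Item,
           CAST(SUBSTRING(Value, CHARINDEX(';', Value + ';') + 1, LEN(Value)) AS VARCHAR(4000)) AS Remainder
    FROM SourceData
    UNION ALL
    -- Peel off the next option until nothing remains.
    SELECT ID,
           CAST(LEFT(Remainder, CHARINDEX(';', Remainder + ';') - 1) AS VARCHAR(4000)),
           CAST(SUBSTRING(Remainder, CHARINDEX(';', Remainder + ';') + 1, LEN(Remainder)) AS VARCHAR(4000))
    FROM Split
    WHERE Remainder <> ''
)
SELECT ID, Item AS Value
FROM Split
ORDER BY ID, Value;
```

For the sample data above this yields one row per ID/option pair, which can then be joined to the related entity to find which N:N associations are missing.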
May 22, 2012
Frequently when importing or scrubbing data within CRM, you may need to identify duplicates to remove or ‘flag’. It’s helpful to have a query that returns all the duplicates in the database but excludes a single ‘surviving’ record (i.e., returns all-but-one of the duplicated records).
This query is one of my go-to query patterns – I rarely, if ever, use it as-is, but will modify it to fit the particular entity / scenario I need to investigate.
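As a sketch of that pattern (assuming contacts deduplicated on full name plus e-mail address; adjust the PARTITION BY key, the survivor ordering, and the view for your entity and scenario):

```sql
-- Return every duplicate contact EXCEPT one 'survivor' per duplicate group.
;WITH Ranked AS (
    SELECT contactid, fullname, emailaddress1,
           ROW_NUMBER() OVER (
               PARTITION BY fullname, emailaddress1   -- the dedupe key
               ORDER BY createdon ASC                  -- oldest record survives
           ) AS rn
    FROM FilteredContact
)
SELECT contactid, fullname, emailaddress1
FROM Ranked
WHERE rn > 1;   -- all-but-one of each duplicate set
```

Changing the ORDER BY changes which record survives (for example, ORDER BY modifiedon DESC keeps the most recently touched record instead).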
The returned record set might be used as the source for a Scribe Insight job to ‘flag’, inactivate, or delete the duplicates, depending on the situation. Of course, you’ll need to take into account the record’s relationships to other entities, etc.
April 16, 2012
If you are planning to integrate Microsoft Dynamics CRM 2011 with another business application, there are several items which you will want to have in place prior to go-live to ensure the success of the integration. A good partner like Customer Effective will do their best to make the integration run error-free; however, any integration needs periodic care and feeding to maintain good performance.
The following list of items was written with Scribe Insight in mind, but most would apply to any integration tool, such as SQL Server Integration Services (SSIS).
January 17, 2012
Customer Effective customer Sabert Corporation is using the Scribe data integration platform to connect its SAP ERP system and Microsoft CRM. Sabert, a leader in the food packaging industry, has dramatically increased sales productivity over the last three years with Microsoft CRM and is now seeing increased efficiency in its reporting as a result of the integration.
Every day, Sabert pushes all of the customer data from SAP to CRM using Scribe. Sabert’s 70 field sales representatives report that they are saving hours a night per person on clerical tasks now that they can get the information they need — such as order status reports — from CRM through Outlook on their laptops.
Mike Freeman, Director of Information Technology at Sabert, knew they needed a partner like Customer Effective to make their CRM project a success: "When we were evaluating CRM solution providers for sales force automation, Microsoft CRM had what we needed: a native Outlook interface, which our field sales already used on their laptops, and the ability to work offline. We knew there would be heavy integration requirements, and we didn’t want to add staff, so we turned to a trusted partner and integration platform."
Read the entire news article on Sabert's success on our Customer Effective News page.
December 15, 2011
The following is a quick way to redeploy Scribe ODBC data connections from one Scribe server to another. This is particularly helpful when you want to move Scribe and/or DTS files from a test/development deployment to a production deployment.
1.) On the server from which you want to copy the ODBC connections, click Start > Run and type “regedit”.
Note: If you are not comfortable working in the registry, ask for assistance. You can cause serious problems if you do not understand what you are doing.
2.) Find the ODBC key for your type of installation. There are two locations depending on the OS version: 32-bit or 64-bit.
32-bit server location: HKEY_LOCAL_MACHINE\SOFTWARE\ODBC
64-bit server location: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\ODBC (Scribe is a 32-bit application, so on a 64-bit OS its DSNs live under Wow6432Node)
3.) Export the registry key to a *.reg file.
4.) Log on to the server to which you want to restore/copy the ODBC settings and run regedit.
5.) Back up this server’s ODBC registry key prior to importing the exported registry keys from the other server, so you can restore the original if something goes wrong (repeat steps 2-3 on this server).
6.) Double-click the *.reg file exported from the source server and the ODBC settings will be imported.
December 13, 2011
When using Scribe, you frequently want your integration process to identify only the records that have changed since the last time the integration job ran, so you only do an incremental insert or update. There are several ways to do this, including message queues or application publishers. One of the simplest ways to do it is by using the GetLastRunDate function in Scribe. This function gets the last time a job has run from the executionlog table of the scribeinternal database, and you can use that variable as a filter in your source query, either from one of the Scribe adapters or from a SQL query.
Here is how you would filter a SQL query by last run date of the job in Scribe Insight:
1. Get the file path for the Scribe DTS. The easiest way to do this is to navigate to the Collaborations folder, Shift + right-click the DTS file, and select “Copy as path.”
2. Create a calculated variable called LastRunDateTime.
3. Set the formula for the calculated variable to be
GETLASTRUNDATE( "C:\Program Files (x86)\Scribe\Collaborations\CustomerTest.dts" ) (replace the file path with the one copied in step 1).
4. Call that in your query as follows:
SELECT TASKID, FLOWID, STEPID, STARTTIME, ENDTIME, STATUS, LOCKBY, LOCKTIME, USERID, USERKEY, USERKEY2, PRIORITY, DEADLINEFLAG, DEADLINE,
AVAILABLEDATE, FILE_NO, FOLDER_NO, DESCRIPTION, DRAWER, PACKAGETYPE, FROM_USERID, DATE_INITIATED, LOCKPERIOD, FROM_FLOWSTEP, DOCTYPE
FROM dbo.IR_TASK_DATA (nolock)
WHERE ENDTIME >= :LastRunDateTime AND FILE_NO <> ' ' AND FILE_NO IS NOT NULL
ORDER BY ENDTIME
Considerations and shortcomings of this approach
- If you lose your scribeinternal database, your process will fall apart—this approach depends on the executionlog tables. Backup, backup, backup.
- The last run date approach can be problematic with related records, for example when accounts and contacts are created together. If you have an account load and a contact load both running off the last run date, you risk failures when accounts and contacts are created after the account load has run but before the contact load runs: the contact will fail because its parent customer doesn’t exist yet. I recommend combining these into one step, getting all contact and parent account information in one query where either the contact’s modified date or the account’s modified date is greater than the last run date. That way you won’t have dependency/timing issues causing records to fail to load.
- The last run date approach can be problematic if rows fail. Since the job only picks up records that have changed since it last ran, a row that fails during a load will not be retried on the next run. I recommend adding a “retry” bit to the source data table; if a row fails for some reason, update the source to flag it for retry. Then have your DTS source query include records that were modified after the last run date OR records where retry = 1. This is also one way to eliminate the timing issues in the previous point: if a row fails because a dependency is not present, flag it for retry, and by the time the job runs again the dependency will probably be there.
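The retry-bit pattern can be sketched as a source query (the table and retry column names are illustrative; :LastRunDateTime is the calculated variable set up in the steps above):

```sql
-- Pick up rows changed since the last run, plus any rows flagged for retry
-- after a previous failure.
SELECT *
FROM dbo.SourceTable (nolock)
WHERE modifiedon >= :LastRunDateTime
   OR retry = 1
ORDER BY modifiedon
```

After a successful load, a follow-up step (or post-job update) would clear the retry flag on the rows that went through.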
- If you are reading from an MSCRM database using the last run date approach, make sure you are reading from the filtered views, not the base views or tables. Keep in mind that the modified date in the base tables and views is stored in UTC, so it can be several hours off from local time, and your last run date logic will not work dependably against it.
October 26, 2011
I recently worked on integrating Microsoft Dynamics CRM 2011 Online with a local deployment of MS Great Plains 10. While you might assume this would be an easy task, considering that Scribe has had the GP templates around for a while now, it presented some challenges stemming from the security around CRM 2011 Online hosting. The native Scribe GP templates need adjustments to connect to CRM 2011 Online.
Scribe pointed us to the following blog post to help you get started. We won’t go into all the details covered in the post, but here are the key points you will need to address to make this work:
- You will need two CRM online accounts for this modified process which means customers have to pay for additional user licenses.
- One of the accounts needs to have the “isintegrationuser” set to 1. You need to create your own DTS to flip this value.
- You will need to modify all DTS connections that point to CRM 2011 to use one of the accounts you created above. There is one exception, so please review the blog post.
- You will have to modify the “publishers” associated with CRM 2011.
- The steps related to installing and configuring “Order History Monitors & Views for Microsoft Dynamics CRM” will not work because you cannot create an ODBC connection to the CRM DB located in the cloud.
The ability to use the GP templates and CRM 2011 online is mentioned in the release notes.
Lesson learned: always read the release notes for Scribe templates (and any Microsoft Dynamics CRM updates). They include important information about system requirements, and can save you from making time-wasting mistakes.
Hope it helps!
April 19, 2011
When companies using Microsoft Dynamics CRM 3.0 or 4.0 move to Microsoft Dynamics CRM 2011, there are some decisions to be made around how to move your current environment and data to CRM 2011. If you have an on premises installation, you can install CRM 2011 on premises and upgrade your MSCRM database to CRM 2011. This will bring in all of your customizations and data from CRM 4.0.
Upgrade or Migrate?
In some cases, a direct upgrade may not be practical, and you may want to selectively migrate data:
- If your existing data is not clean, you may want to start clean with 2011 and selectively migrate just the good data. As an example, consider if at one time you used contracts in CRM 4.0, but you changed your configuration to manage contracts in a different way—you may not want to bring in the legacy contract data if it no longer fits your current approach.
- If your environment is older than CRM 4.0, you may want to do a migration. If you are on CRM 3.0 or CRM 1.2, you cannot directly upgrade to CRM 2011; you have to do incremental upgrades. Say you have CRM 1.2: to upgrade, you would need to upgrade to 3.0 on your 1.2 server, then install CRM 3.0 on a Windows 2003 server with SQL 2005 and upgrade to CRM 4.0, then install CRM 2011 on a 64-bit Windows 2008/SQL 2008 environment and import and upgrade your 4.0 environment to 2011. The further away you are from 4.0, the more practical migrating the data becomes compared to upgrading, given the risk of the process failing at each point of the upgrade path.
- If you are moving from CRM on premises to CRM Online, you will need to migrate your data. There is currently no automatic upgrade process for Dynamics CRM on premises to CRM Online. You can import customizations from CRM on premises to CRM Online; however, moving data still requires a migration.
If you decide to do a data migration to CRM 2011, there are a couple of options to consider:
1. Data import utility: CRM 2011 includes a data import utility that can take data exported to flat files and import it into CRM 2011. This tool is much improved from CRM 4.0: it now handles larger data files and has improved validation for data types, so you can more reliably import data. Using the tool, you can easily import your accounts, contacts, opportunities, and other standard or custom entities.
There are some areas that are not accessible via the import utility, including:
- Activity attachments
- Certain parts of the product catalog
- Activity parties for e-mails with more than one recipient
- Notes with attachments
There are also some practical considerations:
- If your data is mainly core entities like accounts and contacts, you should be able to use the import utility; however, if you have many activity parties or attachments, the import utility may not be able to completely migrate your legacy data.
- User mapping for ownerid and other user lookups can be problematic, especially if names are different in the new version, or if old users no longer exist.
- In some cases, you will want to update records after they are imported. For example, Accounts and Contacts can present a classic “chicken and egg” scenario. Contacts reference accounts, and accounts reference primary contacts. If you insert companies first, you won’t be able to populate the primary contact on the company record before the contact is created. You will need to insert the companies, insert the contacts, and then come back and update the company to set the primary contact. The import utility can update records if the GUID is in the first column of your csv file; however, this can require a bit of manual work to update the existing records.
2. Scribe Insight: Scribe is the leading vendor for CRM data migrations and integrations. Scribe provides an adapter to connect to CRM 2011 as well as earlier versions of Microsoft Dynamics CRM, and can easily load data into entities that cannot be loaded with the standard import utility.
We have migrated several of our clients as well as our internal environment from CRM 4.0 to CRM 2011 online. As part of these migrations, each had data in entities that could not be migrated using the standard import utility. For these migrations, we chose to use Scribe.
As referenced in my earlier post, Scribe includes an auto-map feature that can simplify the mapping of the data for your import. Just specify the legacy environment as your source, select the new environment as your target, and auto link by name.
There are a few things you won’t want to auto map:
- userid: chances are that your user IDs will be different in the target than they are in the source. I used a DBLOOKUP formula in Scribe to translate the full name of the user in the legacy system to the userid of the user in the new system.
- Transactioncurrency: unless you have more than one currency, you can leave this field blank.
- AddressID: for each address on accounts and contacts you will see a field called addressid (Address1addressid and address2addressid). Do not auto-map these.
- Statecode and statuscode: when I brought in my accounts, contacts, e-mails, appointments, etc., I did not map the statecode and statuscode fields. The reason is that once a record is set to inactive, you cannot relate other records to it, and there is a good chance you will have activities, notes, contacts, or other records linked to inactive records. By leaving records active you can establish all relationships, then run a simple update DTS at the end of the process to close out the inactive records.
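As a sketch of the DBLOOKUP formula mentioned for userid above (the connection name "CRMTarget" and source field S12 are hypothetical; the function matches the given value against one field on the target object and returns another):

```
DBLOOKUP( S12, "CRMTarget", "systemuser", "fullname", "systemuserid" )
```

Assuming full names match between the two systems, this returns the new environment’s systemuserid, which can then be mapped to ownerid; records whose owner cannot be matched fall through and should be handled explicitly (for example, defaulted to a known user).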
By doing this, I was able to quickly import the records from my on premises CRM environment to CRM Online.
There are a handful of limitations that I found through this process that Scribe and the import utility cannot import. These are mainly some deeper areas of CRM that are not exposed through the API.
- Quick campaigns are the bulkoperation entity, and they cannot be imported by Scribe or the import utility. This also means that activities, such as e-mails, whose regardingobjectid is set to a bulkoperation record cannot be imported either.
- Contracts have limitations around how they can be imported: they can only be imported in draft status, and you cannot associate other records, such as cases, with them unless they are in Active status with an expiration date in the future. Manual intervention will be required to import cases and set contracts to a state that can be updated.
- When importing opportunities, the actualclosedate will be set to today’s date, even if you try to load another date into that field. The recommendation is, after you update the status of the closed opportunities to Closed, to run an update DTS against opportunities setting the actualclosedate to the correct date.
- If you are reading from your legacy CRM database over a SQL or ODBC connection, be aware that any ntext or long varchar fields must come last in your source query. If they don’t, the field will appear null and the data will not come across. This is especially applicable to fields like the activity description. If your entity has more than one long varchar field, you may need to run subsequent update DTSs with the other long field listed last in the source query. An alternative approach is to install the new CRM 2011 adapter, point the old adapter at your legacy CRM system as your source, and point the 2011 adapter at the new CRM environment as your target; the CRM adapter does not have the long varchar limitation.
In an effort to help others learn from my mistakes, here are some of my lessons learned after a couple of migration upgrades:
1. Determine which entities are being used. The easiest way to do this is to look at your SQL MSCRM database and view the tables by number of records; this will help you determine where the data is, and where it is not.
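A quick way to get that view is a sketch against SQL Server’s catalog views, run in the organization’s MSCRM database:

```sql
-- Approximate row counts per table, largest first, showing which entities
-- actually contain data.
SELECT t.name AS table_name,
       SUM(p.rows) AS row_count
FROM sys.tables t
JOIN sys.partitions p
    ON p.object_id = t.object_id
   AND p.index_id IN (0, 1)      -- heap or clustered index only, to avoid double counting
GROUP BY t.name
ORDER BY row_count DESC;
```

Counts from sys.partitions are approximate but more than accurate enough to decide which entities are worth migrating.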
2. Determine the order. It is crucial that you import your data in the right order, so that data referenced in lookup fields is present when the record is imported. For example, you need to have your accounts in before you load your opportunities. As a general rule, load accounts, contacts, opportunities, quotes, and orders before you load cases, and load activities last.
3. Don’t close cases or other records until all activities have been loaded, and don’t close activities until all activityparties and attachments have been loaded.
4. Activity parties are probably the most difficult thing to import. If you don’t know, activity parties are the people and companies associated with activities, such as the recipients in an e-mail’s To: field or the required attendees of an appointment.
- If you have deleted any activities from your crm system, there is a good chance that some activityparties will be left behind.
- When you create an e-mail and track it in CRM, if one of the recipients is not in CRM as a contact, account, or user, it will create an activity party not linked to any partyid in CRM. These records cannot be imported.
- When you create an activity in CRM, activity parties are created for the sender and the recipient, but also for the owner and the regarding of the activity. When you import the activities, these activity parties are created automatically. If you then import the activity parties including these parties, some rows will fail with errors telling you that they already exist.
- Activityparties cannot be easily deleted.
To avoid these headaches, set up your activityparty DTS source query to inner join activityparty to activitypointer, filtering out any activity parties linked to an activity that was deleted from the system. Also filter out rows where partyid is null and rows where the participation type name is “Regarding” or “Owner.” This will cut the list down to just the legitimate activity parties, and save much time.
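A sketch of that source query; the view names follow the CRM filtered views, and the participation-type name column can vary by version, so verify the names in your database:

```sql
-- Keep only activity parties whose parent activity still exists, that are
-- linked to a party record in CRM, and that are not the auto-created
-- Regarding/Owner roles.
SELECT ap.activitypartyid,
       ap.activityid,
       ap.partyid,
       ap.participationtypemaskname,
       ap.addressused
FROM FilteredActivityParty ap
INNER JOIN FilteredActivityPointer apt
    ON apt.activityid = ap.activityid        -- excludes parties orphaned by deleted activities
WHERE ap.partyid IS NOT NULL                  -- skip recipients with no CRM record
  AND ap.participationtypemaskname NOT IN ('Regarding', 'Owner')
```

The same WHERE clause, adjusted to a date range on the parent activity, is a natural place to chunk the load by year as described below.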
Remembering that activityparties can’t be deleted, if you have several hundred thousand activityparties in your database, you will want to break this process up into smaller chunks, maybe by year. That way you minimize the chance of the job failing, and having to re-load any data. You can also run them simultaneously and load the data faster.
Once the activityparty load was complete, I ran a cleanup DTS that joined the activity parties where partyid is null to contacts on addressused = emailaddress1. This successfully matched most of the unmatched activity parties, and I loaded them, setting partyid to contactid.
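The cleanup match can be sketched as follows (view and column names follow the CRM filtered views; verify them in your environment):

```sql
-- Match unlinked activity parties to contacts by the e-mail address used,
-- so partyid can be updated to the matching contactid.
SELECT ap.activitypartyid,
       c.contactid AS new_partyid
FROM FilteredActivityParty ap
INNER JOIN FilteredContact c
    ON c.emailaddress1 = ap.addressused
WHERE ap.partyid IS NULL
```

If a given e-mail address matches more than one contact, the join will produce multiple rows per party, so you may want to dedupe the contact side first.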
5. Users: If you have been using CRM for several years, there is a good chance that there are users in your system who have been deactivated, and that those users own records, such as activities. Keep in mind that to assign a record to a user, the user must be active. So if you have a former employee named Bob Smith, if you want to import activities and assign them to Bob, you will need to temporarily have his user record active in the new system. This means creating records for users who are no longer at the company.
For our migration, we created users for former employees. Given that CRM Online uses Windows Live ID for authentication, you might think this means you have to set up legitimate Windows Live IDs for each old user; however, this is not the case. You can create users with any made-up address and save the user without sending an invitation to that address. You can then assign records to the user, and finally disable the user record.
An alternative approach is to create a generic user called “former employee” and have Scribe assign any records owned by a former employee to this user. While this approach is faster, it may detract from visibility to who created the record.
6. Do not load directly to the activity entity. You have to load to the individual activity type entities—email, appointment, phonecall, or task.
Keeping these best practices in mind, you can quickly migrate your data from CRM 4.0 to CRM 2011 Online.
February 23, 2011
So you have recently installed Scribe on a server. You’ve built some DTS jobs and everything looks to be importing and translating smoothly. THEN one of your jobs throws up a report like this:
12,555 records loaded successfully; 6 rows failed. “Hmm, I wonder why those records failed?” you ask yourself. And so starts your exploration.
Enabling Rejected Rows in Scribe:
Before you can review the rejected rows, you must first tell Scribe to put the rejected records into a table. This is done by going to Run –> Edit Settings –> Rejected Source Rows (tab) –> Output Rejected Source Rows (checkbox). You can then tell Scribe where you want this “Reject Rows” table. Additionally, you can specify whether Scribe should create a new table for each run of the job or re-use the same table. Generally speaking, you don’t want a bunch of extra rejected rows tables for the same job, so I would recommend selecting the option to re-use the same table. I’ve defined it this way in the following screen shot from Scribe:
Querying Rejected Rows:
There can be any one of a thousand reasons why a particular row fails to import. In my particular situation, the pick-list value in my SELECTCASE statement turned out to be incorrectly numbered, which was causing those 6 rows to be rejected. But before drawing that conclusion definitively, I wanted to see the data. For users of MS SQL Server 2005 or 2008, reviewing these rows is a fairly straightforward process. Executing a query against a table in MS SQL Server is done through the Query Editor window. One option is to right-click the table and select Script Table As –> SELECT To –> New Query Editor Window, as depicted in the following:
Once you have the query set up appropriately in the Query Editor Window, simply select “Execute”. The rows of data can then be seen in the results tab below the Query Editor Window.
No Microsoft SQL Server Tools?:
My particular server where Scribe was located did not have the MS SQL Server 2005 or 2008 tools installed, so I couldn’t run a simple SQL query against the REJECT_CONTACTMIGRATION table. Frustration set in. I didn’t have a spare copy of Microsoft SQL Server 2005/2008 lying around. What could I do? Luckily, there was a solution.
As it turns out, Microsoft has a free product for just such a situation: Microsoft SQL Server Management Studio Express (MSSMSE). From the website (link): (Note, this link is for MSSMSE 2005. The 2008 version can be found here: link)
“Microsoft SQL Server Management Studio Express (SSMSE) is a free, easy-to-use graphical management tool for managing SQL Server 2005 Express Edition and SQL Server 2005 Express Edition with Advanced Services.”
Download the file appropriate for your installation and MSSMSE will connect to the Scribe Internal DB automatically. You should see the following once you have installed and launched the application:
In this case, it is connecting directly to the Scribe Internal database. Once connected, you can browse directly to the table you want to review:
Once you find the table you want to query, simply right-click on the table and select “New Query Editor Window”. This will populate the SQL Query Editor window with the appropriate SQL SELECT statement for the table you want. Execute the query and then you can view the rows in the table.
Once I identified the problem, I simply modified the DTS to include a filter for records with my reject record condition; re-ran the DTS; and my 6 lost records were loaded successfully! Victory!
Other Scribe topics can be viewed on Customer Effective’s Scribe Blog, located here.
Lastly, another topic on our forums enables you to create a workflow to surface Scribe errors and monitor them directly into CRM, which can be very helpful. This is located here.
Good luck with your CRM projects!
January 26, 2011
One of the criticisms that Scribe Insight sometimes gets is that it’s slow. While it is true that Scribe is not the speediest ETL tool out there, much of the blame for sluggishness can be attributed to DTS design.
One rule of thumb is that every trip to the source or target slows down the process. That means that dblookups and seek steps, while useful tools, cause the process to go slower. Also, updates and update/inserts hit the target 2-3 times, so updates are slower than inserts.
Here’s an example of what I’m talking about: I integrated a prospect database with Microsoft Dynamics CRM. This is going to be a couple of hundred records inserted or updated per day, but to get started, we first had to load the existing prospect data, roughly 2.5 million records.
I wrote the DTS for the on-going integration, in which I used an update/insert step, as they will be getting updated information for existing prospects, and we want the process to update if it exists, and insert if it doesn’t. This DTS also contained a seek step and two dblookups to populate lookup fields in CRM from other CRM entities.
When I tested the DTS, it imported 10 records every 3 seconds. While this performance would be satisfactory for the on-going integration volume, it would be too slow for the initial migration of legacy data. At that rate, it would take 8 days to load the legacy data.
Here’s what I did to cut the migration of the legacy data down to a reasonable amount of time:
1. I copied the legacy data to a table in the ScribeInternal database, and made ScribeInternal the source for my migration DTS. This made it a shorter trip to get the data.
2. I moved the lookups and seeks to the source query. Instead of doing a lookup to the account table in the DTS, I joined the filteredaccount view from CRM in to my source query, and removed the dblookup and seek steps from my DTS. Now instead of having to seek for the id of a related record, I have the GUID in my source data, and write that directly to the accountid field.
3. Given that the legacy data contains unique records that do not yet exist in CRM, I made my migration DTS an insert step.
The result is that the DTS now only hits the source and the target one time. The difference in performance is pretty astounding: the migration DTS now moves 50 records per second, turning a job that would have taken days into one that takes hours.
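The "days to hours" claim is just arithmetic on the figures quoted in this post, which a few lines of Python can verify:

```python
# Throughput math using the numbers from this post:
# 2.5 million records at 10 records / 3 seconds vs. 50 records / second.
records = 2_500_000

before_rate = 10 / 3   # records per second with the original DTS
after_rate = 50        # records per second after the redesign

before_days = records / before_rate / 86_400
after_hours = records / after_rate / 3_600

print(f"before: {before_days:.1f} days")   # before: 8.7 days
print(f"after:  {after_hours:.1f} hours")  # after:  13.9 hours
```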
The final thing you can do is to thread your job. As you may know, jobs run through XML message queues are threaded, meaning multiple operations happen simultaneously, but jobs run through the workbench are not threaded. If your on-going integration is going to be a high data volume, using message queues is a good way to go.
However, setting up an XML-based DTS for a one-time migration adds complexity. Another option to thread your jobs is to break your migration DTS into multiple DTSs and run multiple instances of the Workbench simultaneously. In this case, I made a copy of my migration DTS. In the first DTS, I filtered the source query to records created prior to 1/1/2008; in the second, to records created on or after 1/1/2008. I then ran them at the same time.
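The partition-and-run-in-parallel idea can be sketched in miniature. This is an illustrative Python sketch, not Scribe code: `run_dts` is a hypothetical stand-in for one Workbench instance, and the sample records are made up.

```python
from concurrent.futures import ThreadPoolExecutor
from datetime import date

# Hypothetical stand-in for one Workbench instance running a filtered DTS.
def run_dts(rows):
    for row in rows:
        pass  # an insert against the target would happen here
    return len(rows)

# Made-up source records spread across several creation years.
records = [{"id": i, "created": date(2005 + i % 6, 1, 15)} for i in range(10)]
cutoff = date(2008, 1, 1)

# Mirror the two source-query filters described in the post.
older = [r for r in records if r["created"] < cutoff]
newer = [r for r in records if r["created"] >= cutoff]

# Run both partitions at the same time, like two Workbench instances.
with ThreadPoolExecutor(max_workers=2) as pool:
    done = sum(pool.map(run_dts, [older, newer]))

print(done)  # 10 -- every record lands in exactly one partition
```

The date filter guarantees the partitions are disjoint and complete, so no record is processed twice or missed.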
You are still constrained by the resource limitations on your server and the CRM web services, but by following these steps you can optimize your migration processes to run in a reasonable amount of time, without having to resort to unsupported methods, like direct table updates.
January 04, 2011
So you are moving to Microsoft Dynamics CRM, but you don’t want to lose all of your legacy data surrounding contacts, accounts, sales leads, or other business data. If you are replacing another system with a relational database, migrating data can be fairly straightforward—use an integration tool like Scribe Insight to import data from the legacy system, extract data to flat files and import with the CRM Data Migration Manager, or use the SDK to import directly through the CRM web services.
But what if your data is not in a relational database? Many small to mid-sized companies find that they have data stored in multiple spreadsheets or Outlook PST files. These cases present extra challenges because unlike migrating data from one source, migrating data from multiple little sources is not a straightforward process.
Some of the challenges include accessibility of data, data quality, and duplication.
For example, Acme company has a bunch of legacy data stored in Outlook PST files. They want to bring the data in to Microsoft CRM. What are their options?
One option is to just have users track their old emails, appointments, and contacts in CRM, since Microsoft Dynamics CRM for Outlook integrates the two; however, this is usually not the best option. Only a subset of contact fields synchronizes from Outlook to CRM, so values in fields like Category, or in any custom fields, will not be tracked to CRM. Also, if you have a large number of users, some of them will likely have the same contacts in their Outlook Contacts; if all users track all of their contacts, duplicates will be created in CRM, and a more recent e-mail address or phone number may be overwritten by an older version.
In most situations, it is preferable to extract all Outlook data, clean/de-duplicate it in Excel or SQL Server, then import the scrubbed data in to CRM. This is sometimes easier said than done. There are several ways to do it:
1. Outlook import/export wizard
Outlook includes a standard import/export wizard, which enables exporting data to flat files. In Outlook 2010 it is somewhat hidden: go to the File tab of the ribbon, and under “Open” you will find the “Import” option, which also handles exports.
From the import/export wizard, you can export Outlook data to csv, txt or other formats. The major limitation of the export wizard is that it will not export custom fields. If your data is only using standard fields, it works great, but if your data in Outlook uses any custom (user-defined) fields, it will not be sufficient.
2. Scribe Insight Outlook/Exchange adapter
If you read the CEI blog, you will see that we strongly recommend Scribe for CRM integrations and migrations. It has a fantastic CRM adapter, works with just about any data source for very painless integrations and migrations, and offers an Outlook/Exchange adapter that can extract data from PST files.
However, the Scribe adapter for Outlook requires Microsoft Collaboration Data Objects to connect to Outlook and Exchange. CDO was a standard component of Microsoft Outlook through Outlook 2003. Starting with Outlook 2007, they took it out of the product but still made it available as a download. With Outlook 2010 CDO is no longer supported. The installer for CDO will not run if you have Outlook 2010 installed. So the Scribe option is good if you have Outlook 2007, but is not an option if you are on 2010. Scribe is still probably the best option for bringing in the data to CRM once you have it extracted from Outlook, but in this case, not the best option for the data extraction.
3. Copy and paste to Excel
If you are looking at your Outlook data in a tabular view, you can select multiple records and copy then paste in to Excel. This can be a good option for small data sets, especially contacts, where most fields contain just one line of data. When you paste it in Excel, it typically will paste neatly.
The limitation of this approach is that it becomes very tedious with large data sets in Outlook. If someone has 200,000 contacts, you won’t be able to easily copy and paste all of them in one step. Also, fields like the email body that contain multiple lines of data will not paste neatly into the spreadsheet columns.
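The multiline-field problem is about quoting, not about the data itself. As a general illustration (this is standard CSV handling, not an Outlook feature), Python's csv module quotes embedded newlines so a multi-line body survives a round trip:

```python
import csv, io

# A contact whose "body" field spans multiple lines -- the case that
# breaks a naive copy/paste into a spreadsheet.
rows = [{"name": "Jane Doe", "body": "Line one\nLine two\nLine three"}]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "body"], quoting=csv.QUOTE_ALL)
writer.writeheader()
writer.writerows(rows)

# Reading it back, the embedded newlines survive because csv quotes them.
buf.seek(0)
restored = list(csv.DictReader(buf))
print(restored[0]["body"] == rows[0]["body"])  # True
```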
4. Microsoft Access
This falls in to the category of something I used to know. Not having worked with Access for years, I forgot that Access is the only Microsoft Office application that can import data directly from Outlook PST files. In Access 2010, go to the “External Data” tab and select More—>Outlook Folders.
You can then select any folder from Outlook and import it as a table in an Access database, where you can clean your data prior to importing to Dynamics CRM. I’ve found that this option works great in most cases, but it may have some issues with exceptionally large PST files or with PST files that are not stored on the local PC.
5. A free PST export utility
This is a fantastic free application that makes exporting Outlook PST data very easy. It can extract all types of data, including custom fields, to CSV files, and gives full control over which columns get extracted.
I found the options for exporting email party and date fields especially useful.
Date fields are available as date/time, date only, or time only. Email parties, such as From, To, CC offer both the address or the display name. This granularity can help to pre-parse the data in a format that will be more convenient to import to CRM.
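Even without that granularity, pre-parsing party fields is straightforward with Python's standard library. This sketch assumes the extract hands you combined "Display Name <address>" strings (sample values are made up):

```python
from email.utils import parseaddr

# Example "From" values as they might come out of a PST extract.
parties = ['"Bill Roth" <bill.roth@example.com>', 'jane@example.com']

for party in parties:
    # parseaddr splits a party into (display name, address).
    display, address = parseaddr(party)
    print(display or "(no display name)", "|", address)
```

Splitting display name from address up front makes it easy to map each piece to its own CRM field during import.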
So the answer to the question “How do we get our legacy data from Outlook?” is “It depends . . . “ If your data is simple data in small quantities, 1-3 may work, but if you have custom fields or large PST files, you will want to try option 4 or 5.
December 08, 2010
When integrating Microsoft Dynamics CRM with other business systems, such as ERP, there are several best practices that you can follow to optimize the performance of your integration. These tips apply whether you use Scribe Insight or any other integration tool.
1. Minimize trips to the source and target. When integrating systems, one of the factors that can impact integration performance is excessive use of lookups against either the source or the target. This can happen, for example, when a record is related to another record, such as a contact/parent customer relationship, and you need to retrieve the CRM ID of the parent record. If you have many lookups in your integration, its performance will suffer.
There are several approaches to reduce or eliminate lookups in your integration:
- Scribe has a feature called “Key cross reference.” The idea is storing a cross reference table mapping keys from the source and target in the internal database so retrieving keys for lookups does not require a trip to the source or target database.
- Join the target in your source query—If both databases are on the same server, or can be joined with a linked server, a good option can be to join the target with the source in your integration source query—this puts all of the heavy lifting on the data query, and eliminates the need to do database lookups during the integration.
- Store the record ID in a hidden field in your source—this option works especially well for integrations where another system than CRM is the source of a particular type of record. For example, in a GP-CRM integration, customer accounts will typically only be updated in the ERP system. By storing the GP ID of the record on the CRM record, we can reduce lookups to the GP system—in essence, CRM is our key cross reference. This also serves the benefit of providing a more precise key on which to map for updates. If the update integration matches the source and target using account name or even account number, problems can arise should any of those values change. By storing the ERP ID of the record in CRM, we can precisely match records even if the name or account number is updated in GP.
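The key cross-reference idea behind these approaches can be sketched in miniature: one upfront query builds an in-memory map from ERP keys to CRM GUIDs, so no per-row lookup ever hits the target. This is an illustrative Python sketch with made-up field names and GUIDs, not Scribe's actual implementation:

```python
# Key cross-reference in miniature: build the map once, then resolve
# every row from memory instead of issuing a dblookup per record.
# All names and GUIDs below are made up for illustration.
crm_accounts = [  # imagine this came from one query against CRM
    {"accountid": "11111111-1111-1111-1111-111111111111", "gp_id": "ACME001"},
    {"accountid": "22222222-2222-2222-2222-222222222222", "gp_id": "GLOBEX01"},
]
xref = {a["gp_id"]: a["accountid"] for a in crm_accounts}

gp_rows = [{"custnmbr": "ACME001", "name": "Acme Corp"}]
for row in gp_rows:
    # Resolve the target GUID from the in-memory map -- no trip to CRM.
    row["accountid"] = xref[row["custnmbr"]]

print(gp_rows[0]["accountid"])  # 11111111-1111-1111-1111-111111111111
```

One query to build the map replaces thousands of per-row round trips, which is the whole performance win.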
2. Duplicate picklist option values
If your source or target systems make use of picklists, design the picklists so the option numbers match between systems. If you have a customer type code in Dynamics GP, making the values and labels of its picklists precisely match the values and labels of the corresponding picklists in Dynamics CRM lets you simplify and optimize the integration. If you use Scribe, you can now disable the CRM adapter picklist validation. Whatever integration tool you use, you will then be able to do a straight source-to-target data map, without having to transform the values.
3. Map user ID’s
When you import data into and out of CRM, it is important not to lose visibility into which users created or updated the records; however, this can be a challenge if the other system does not use Active Directory. For example, a record in the ERP system might be created by the user id BROTH, while in CRM Bill’s user id is domain\bill.roth. To provide a cross reference for user ids, consider adding an attribute to the CRM user record to contain the user id from the integrated system. This will enable you to quickly map users between systems for things like account assignments. Make this field required, or have a process in place to alert you if a user is created without the other id; otherwise, your integration will not be able to correctly populate records for that user.
September 28, 2010
When working with Scribe, there are some common issues that you may run into. I’ve found that following a troubleshooting strategy can help to quickly resolve these issues. Here are the top 3 that I run into or hear about from clients:
1. Job runs in the Workbench, but not in the Console.
Consider this scenario—you create a Scribe DTS and test it in the Scribe Workbench. The job runs without error. When you schedule the job to run in the Scribe Console, the job fails.
The reason for this error is typically database permissions. When you run a SQL DTS in the Workbench, Scribe connects to the ODBC data sources using the credentials of the logged-in user. When a Scribe DTS is executed by the Console, it runs using the credentials of the Windows account that the Scribe services run as. So if that account does not have permission to read the source database, the job will fail when run by the Console.
Also, the Windows account that runs the Scribe Services must also have permissions for the Scribe Schema in the SCRIBEINTERNAL database. In SQL Management Studio, expand the security node under the Scribeinternal database and double click the user record for the execution account. Under “Schemas owned by this user” select the SCRIBE schema.
2. Generic SQL error
This can be the most frustrating error. Rows fail and the only error message is “Generic SQL Error.” The main reason for this error is that there is some data constraint on the target that is being violated by the data inserted. It’s important to underscore that Scribe is mainly passing along errors generated by the source or target database. If the target gives generic errors, the scribe error message will also be generic. You will sometimes see this when inserting through the CRM adapter if there is a field length or data type issue.
The best way to troubleshoot this is through a process of elimination. Here are the steps I follow:
- Make a copy of your DTS.
- Remove all data links except for the required fields.
- Run the DTS filtered to the row that was rejected.
- If the DTS succeeds, add 1-2 additional data links and run the job again.
- Continue this process until you receive the error message again. This will identify which field is the problem.
- Once you identify the problem field, create a record via the CRM UI, entering the value from the source record. This will typically tell you what the problem is.
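The elimination process above can be sketched as a loop. This is an illustrative Python sketch: `try_insert` is a hypothetical stand-in for running the trimmed-down DTS against the rejected row, and the field names and length rule are made up.

```python
# Field-by-field elimination, as described above. try_insert stands in
# for running the trimmed-down DTS against the rejected row.
REQUIRED = ["lastname"]
OPTIONAL = ["firstname", "emailaddress1", "address1_line1", "telephone1"]

def try_insert(fields, row):
    # Pretend the target rejects any mapped address longer than 50 chars.
    if "address1_line1" in fields and len(row.get("address1_line1", "")) > 50:
        return False
    return True

row = {"lastname": "Smith", "firstname": "Jo", "address1_line1": "x" * 80}

fields = list(REQUIRED)
culprit = None
for field in OPTIONAL:
    fields.append(field)
    if not try_insert(fields, row):
        culprit = field  # the last field added caused the failure
        break

print(culprit)  # address1_line1
```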
Here’s an example of where I recently saw this: I was importing contacts, and 3 of them from Australia failed. After verifying that the mapping was correct and checking the Scribe event logs for more information, I followed the troubleshooting process above. When I got to the Address 1: Line 1 field, I got the error, which told me that was the problem field.
By manually creating a record through the front-end of CRM, I found that the issue was that the user had increased the field length of the Street 1 field to 200 characters. While this is not typically a problem (as pointed out in this post), in this case it was due to the way that address fields work in CRM. The address fields on the account and contact are not actually stored in the account and contact entities, but rather in the address entity. That means if you increase the field length of CRM address fields on the contact record, you must also increase the length in the address entity.
After increasing the length in the address entity, the records inserted without error.
3. Field doesn’t update
Consider this scenario—the job runs, and records are updated, but one field in particular does not get updated.
In the Scribe Workbench, when you view your data links, there is a column with the heading “Overwrite.” This column indicates whether an update step should overwrite an existing value in the column. If there is a * in the column, the field will be updated with new values, if it is not, the existing values will not be updated.
To check or uncheck the “Overwrite” bit, just click on the “overwrite” column for the data link.
The issue is that it is very easy to inadvertently click on the “Overwrite” column when trying to double click the Formula column to edit a data formula. If you do this, the column will not get updated when a job runs.
Lesson learned—be careful where you click. If you want to change a formula, be sure to click on the formula column or select the row and click the “formula” button. Also, I like to give the data links a visual once over when I finish the DTS to make sure that I have not accidentally unchecked the “Overwrite” flag.
September 24, 2010
I’m in Bedford, NH for the inaugural Scribe User Conference. I’ve never been to New Hampshire before, and it is a very beautiful place to visit during the fall.
During the keynote address yesterday, Scribe CEO Lou Guercia announced a new product, codenamed “Denali,” that will be Scribe’s first cloud-based offering. The initial release, slated for the end of this year, will function as a database replication tool for CRM Online users, allowing them to replicate their CRM Online databases to a local database, which they can then use for reporting and analytical purposes, or to integrate their CRM data with on-premise systems.
Scribe Online is going to be hosted on Azure, and the initial base product will be free.
July 06, 2010
When looking at data migration or integration options, a frequent question is: why should we use Scribe instead of BizTalk? Scribe has created a very detailed and unbiased white paper that compares the two solutions and explains when you should use Scribe and when you should use BizTalk. Their point about the adapters is a good one--the BizTalk adapters require a fair amount of coding to implement, while the Scribe adapters provide a more complete solution for data-driven integrations.
You can read more about it and download the white paper here: http://blog.scribesoft.com/2010/07/scribe-or-biztalk-you-ask.html
June 28, 2010
On a recent client implementation we came across a challenge connecting to an Oracle 10g database via ODBC on a 64-bit machine with Windows Server 2008. We initially attempted to connect to the database through the Scribe 5.3 Oracle driver with the Oracle 10.2.0 client installed. The jobs were set up and tested through the Scribe Workbench and ran interactively with no problems.
Once the testing was complete, we set up the new collaboration and chose to run the .dts on a timed basis. However, every time the Console attempted to run the job, Scribe threw an "Error connecting to data source" message. The bottom line: the jobs ran from the Workbench, with no user name or password prompts, but not from the Console.
The solution: Upgrade the Oracle ODBC client to the most recent version, 11.2.0. It appears that there is a conflict when using an older Oracle ODBC client and Scribe 5.3 Oracle driver, particularly on a Windows 2008 64-bit server.
You can download the latest version of the client from their website here.
June 23, 2010
Sometimes in a large deployment of Microsoft Dynamics CRM, it can be a challenge to create users. When you have 25-50 users, it’s not hard to manually create each user record, or to use the add multiple users tool to create user records. However, when your user count is in the hundreds or thousands, even the add multiple tool can be a very tedious process.
Many of our customers use Scribe Insight to migrate data into CRM, or to integrate CRM with other business systems; however, not many are aware that you can also use it to create user records. Here is the simple process:
1. Create a spreadsheet with the domain credentials of the users that you want to import. I recommend saving the spreadsheet in CSV format, as it requires the least level of file manipulation.
2. Create a DTS in the workbench with the CSV file as your source and the CRM adapter as your target.
3. Add an insert step creating the systemuser records. Map the domainid field of the source csv to the domainname field of the CRM system user entity.
At a basic level, that’s all there is to it. In addition to this process, you could also include the role and add a second step to create the user role record; however, if you don’t, it’s not a big deal, because it’s easy to bulk-assign users to roles after the user records are created. Also, if other attributes like first name, last name, email, and phone are present in AD, they should automatically populate in the user record. You can include them in your mapping if they are not present in AD, or if you want them to differ from their values in AD.
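The single data link in step 3 amounts to a one-field mapping, which this Python sketch illustrates (the CSV content and domain names are made up; each dict stands in for the payload a Scribe insert step would send to the systemuser entity):

```python
import csv, io

# The spreadsheet of domain credentials from step 1, in CSV form.
# Domain and user names here are made up for illustration.
source_csv = "domainid\nCONTOSO\\jsmith\nCONTOSO\\broth\n"

# Mirror the single data link from step 3: source domainid ->
# systemuser.domainname. Each dict is one insert payload.
payloads = [
    {"domainname": row["domainid"]}
    for row in csv.DictReader(io.StringIO(source_csv))
]

print(payloads[0])  # {'domainname': 'CONTOSO\\jsmith'}
```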
This is also a useful capability for companies looking to maintain a test and production environment that want to keep the user accounts synchronized between the two environments.
This is a great example of the extra value that Scribe can add to your CRM implementation, and the power it has to import data to areas where the out of the box tools cannot go.
June 10, 2010
If you import data into CRM from CSV files via the CRM Import Utility or a tool like Scribe Insight, one frequent headache is columns with leading zeros. This most frequently comes up with postal codes or phone numbers that don’t have any formatting characters, just numbers.
When you have an Excel spreadsheet and save as text or .csv format, the leading zeros are preserved. So if the value in your spreadsheet is 09352, when you export to CSV, it will still be 09352, and if you import to CRM, the leading zero will be there.
If you reopen the csv file in Excel, it interprets the data to determine what format each column should be: general, numeric, etc. When it sees a 5-digit postal code with a leading zero, it interprets it as a number and does not display the leading zero. If you have the full postal code with a hyphen and four trailing digits, like 09352-1111, the leading zero will display, because the presence of the hyphen tells Excel it is a text value, not a numeric value.
The problems arise if you edit the csv file in Excel and make changes. If you reopen the csv file and save it, the leading zeros will be stripped out of the csv file, and your import will create postal codes without the leading zero.
To avoid leading zero import issues, I recommend the following best practices:
1. Make all edits in Excel format, then save to CSV.
2. If you need to edit the csv file, do so in a text editor, such as notepad.
3. If you still have issues, edit the trouble rows in notepad and place double quotes around the values with leading zeros.
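The underlying behavior is easy to demonstrate. This is a general illustration of CSV text handling, not a Scribe or Excel feature: the value survives as long as nothing coerces it to a number.

```python
import csv, io

# "09352" survives a CSV round trip as long as it stays text.
buf = io.StringIO()
csv.writer(buf).writerows([["zip"], ["09352"]])

buf.seek(0)
value = list(csv.reader(buf))[1][0]
print(value)        # 09352 -- preserved as a string

# This is what a numeric re-interpretation (like reopening the file
# in Excel) effectively does to the value:
print(int(value))   # 9352 -- leading zero gone
```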
May 25, 2010
--This is a post that I recently wrote for the Scribe Integrator blog, which is a great blog for all things Scribe Insight.
When installing software on Windows 2008, you typically want to install it as an administrator. I’m not just talking about the user account having local administrator rights—with Windows User Account Control on Windows Server 2008 (and also on desktop versions of Windows Vista and Windows 7), you have to right click on an .exe file and select “Run as Administrator” if you want the installation to run successfully. This applies to almost any application, including Microsoft Dynamics CRM, Scribe, SQL Server, and others.
Of course, you can disable User Account Control; however, frequently this is not an option, especially if you aren’t the owner of the server on which the software is being installed.
With Scribe Insight installs, I have found that you always want to run the installation as an administrator; however, there are a few more considerations.
The first time you run Scribe Workbench
The first time you run the Workbench and register the application, it is also a good idea to run it as an administrator (right-click the Scribe icon on your desktop and select “Run as administrator”). If you don’t, you may see error messages, or the registration may not “take,” requiring the registration wizard to be re-run the next time you open the application. After the application has run once and the product is registered, running as administrator is typically not required.
The first thing I always do after installing Scribe Insight and registering the product is to install the 6.5.2 update. This is the most recent update to Scribe Insight, and it contains a number of very important updates and fixes. As with the application installation, you want to run the update install as an administrator; otherwise, you may see installation errors.
But how do you run the update as administrator? Since the update is an MSI file, when you right click, there is no “Run As Administrator” option. There are several ways to do it, this is my favorite and probably the least painful way to do it:
- Download the 6.5.2 update.
- Navigate to the downloaded file ScribeInsight.msi. Hold the shift key and right click on the file and select “Copy as path.”
- Open Notepad and paste the copied file path. Save the file, making the file extension “.bat.”
- Right click the .bat file and select “Run as Administrator.”
The update will now install successfully.
March 10, 2010
In Microsoft Dynamics CRM, there is a unique lookup field called “Parent Customer” that is found on system entities such as Contacts and Opportunities. This is a lookup field that can look up an account or a Contact record.
When importing data into CRM using a tool like Scribe Insight, when you populate the parentcustomerid field, you also have to populate the parentcustomeridtype field. This field tells CRM whether to link to a contact or an account. You set it to 1 if it is an account, or 2 if it is a contact.
I was recently writing a data import using Scribe to insert contacts. It succeeded for most records but failed for a few. The error I got on the failures was “(Dynamics CRM) System.FormatException: Guid should contain 32 digits with 4 dashes (xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx).”
In this case I was populating the parentcustomerid field, and I was setting the parentcustomeridtype field to “1.”
After reviewing the failing rows, I realized that these contacts did not have parent companies in the source—the field was null. I added logic to the parentcustomeridtype field to only populate it if the parentcustomerid was not null.
Lesson learned—if you populate the parentcustomeridtype field, you have to populate the parentcustomerid field.
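The fix amounts to a conditional mapping, sketched here in Python (an illustration of the logic, not Scribe formula syntax; the sample rows are made up):

```python
# Only set parentcustomeridtype when there is actually a parent customer
# id to go with it -- otherwise CRM expects a GUID and throws the
# FormatException described above.
ACCOUNT_TYPE = 1  # parentcustomeridtype: 1 = account, 2 = contact

def map_contact(source):
    target = {"lastname": source["lastname"]}
    parent_id = source.get("parentcustomerid")
    if parent_id:  # skip both fields when the source value is null
        target["parentcustomerid"] = parent_id
        target["parentcustomeridtype"] = ACCOUNT_TYPE
    return target

print(map_contact({"lastname": "Smith", "parentcustomerid": None}))
# {'lastname': 'Smith'} -- no orphaned type field, so no GUID error
```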
November 20, 2009
I have had a chance to play with some of the new training materials from Scribe Software, and they are very impressive. They use a platform called vBooks, which is a very innovative way of combining text, video and animation to deliver a much more user friendly learning experience. I really wish this had been around when I went through Scribe certification a couple of years ago.
You can also register for a live webinar with Mark Walker, Director of Scribe Technical Resources, who will demonstrate Scribe's new training delivery through the use of vBooks.
Thursday, 12/3/2009 11:00 AM - 11:45 AM EST. Register here.
November 12, 2009
Many Microsoft Dynamics CRM users also use Scribe Insight for integration and migration. While it’s a great tool, one warning that I see occasionally is:
Subject: System alert: Scribe EventManager - The job was terminated because the Message Processor is unresponsive.
Unfortunately after this, other jobs fail and nothing seems to work until the Scribe Services are restarted.
The root of this issue is a long-running query in one of the Scribe DTSs: the Scribe Event Manager terminates job processing with a fatal error if a source row or a post-DTS script takes longer than 60 seconds to process, because Scribe Insight assumes the process is hung and should be terminated.
The best fix is to review your source queries, see which ones take a long time to run, and fix those. Many of these queries can be optimized with just a few tweaks or indexes, or by converting the query to a view in SQL and using the view as the source, which is my preference.
A workaround is to increase the time Scribe waits before giving up on the slow source query. (I’d still look to improve the performance of the source query, but sometimes you have to get it running now, then resolve the cause later...)
I found this in the Scribe forums recently (https://openmind.scribesoftware.com/) – I verified with Scribe support that it is still valid way to increase the timeout for the message processor. (At least through Insight version 6.5)
Warning: Always be sure to back up your Registry before attempting to use Registry Editor. If you use Registry Editor incorrectly, you may cause serious problems that may require you to reinstall your operating system. Scribe cannot guarantee that you can solve problems that result from using Registry Editor incorrectly. Use the Registry Editor at your own risk.
1. On the Scribe server open the Registry Editor and go to HKEY_LOCAL_MACHINE\SOFTWARE\Scribe\EventManager\Settings.
2. Right-click on the Settings folder, on the shortcut menu, point to New and then click DWORD Value.
3. Name the new value ProcHangTimeout.
4. Right-click on the new value, on the shortcut menu, click Modify.
5. In the Edit DWORD Value dialog box, in the Value data box, type 600. This is the amount of time, in seconds, to wait before terminating the process.
6. In the Base box, choose Decimal.
7. Restart the Scribe services for the setting to take effect.
* Be aware that Scribe Insight Updates can wipe out these keys, so make sure to check post-upgrade to see if they need to be re-added.
November 06, 2009
Lately I have found myself doing a lot of Scribe work, much of it on Windows Server 2008. I’ve found several suggestions that make the installation of Scribe on Windows 2008 go more smoothly:
1. Install Insight 6.5.2 update. This will fix many of the issues with running Scribe in Windows 2008. Download it here.
2. Turn off UAC. By default, User Account Control is enabled in Windows 2008. During the installation of Scribe Insight or the 6.5.2 update, the installer makes changes to program files and registry settings—things that are protected by UAC. Turning off UAC during the installation will eliminate many of the problems. Post installation, UAC can be re-enabled, if you desire to do so.
3. Change the login account used by the Scribe services. I have seen multiple instances where the Scribe services won’t run on Windows 2008 under the default Windows service account. I recommend setting up an AD account with a non-expiring password and setting the services to log on as this account. It is important that the password not expire; otherwise, your integration processes will break when the password changes. You will also need to make sure that this account has database owner rights on the ScribeInternal database.
September 22, 2009
Our friends at Scribe Software are offering a free webinar October 1 at 11 AM EST on the topic of optimizing Scribe Insight performance.
Presenters:
- Eddie Cole, Senior Software Engineer
- John Gravely, Vice President of Marketing and Product Marketing for Scribe Software
- Brendan Peterson, Technical Support Analyst
This will be a great session for anyone who uses Scribe and wants to optimize their integration processes.
For a preview, see Brendan’s blog post from the Scribe blog, Getting the biggest bang for your buck--Improving Scribe Insight Performance.
May 06, 2009
Webcast: Scribe Web Services Adapter Product Launch: Overview and Demonstrations – Thursday, May 14 @ 11am – 12pm EST
This webcast will cover the newly released Web Services Adapter for Scribe Insight and Scribe Insight Enterprise. The webcast will include a feature overview and several demonstrations of the Web Service Adapter enabling integration to Dynamics AX, Microsoft SharePoint and 3rd party commercial web services. Click to read the Adapter for Web Services data sheet.
Scribe Presenters for the webcast will be:
- Bob Sturim - Vice President, Products
- John Gravely - Vice President, Marketing and Product Marketing
- Lou Antonucci - Vice President, Sales
- Tomas O’Brien - Product Manager
Please register at the following registration URL https://www.livemeeting.com/lrs/8001615779/Registration.aspx?pageName=rx9xxrf8tbj9g3rz
April 06, 2009
The guys at Scribe Software have released a 14-page overview containing more details about Scribe 6.5. This really shows a commitment to listening to feedback from customers and partners and making improvements to an already great product.
April 04, 2009
We are pretty excited about the release of Scribe Insight 6.5. This release includes several new capabilities that we have wanted for a long time—here are some of the highlights:
- Support for Windows authentication
- Failover/Disaster Recovery
- Web Services Adapter
- Support for 64-bit Windows Server 2008
The folks at Scribe Software will be hosting a webcast to provide product overview information as well as detailed feature information and demonstrations.
Scribe 6.5 Product Launch Thursday, April 16 @ 11am - 12pm EST
This webcast will provide a first look at the new release with the opportunity to ask questions. Register here.
The Scribe presenters for the webcast will be:
- Bob Sturim - Vice President, Products
- John Gravely - Vice President, Marketing and Product Marketing
- Lou Antonucci - Director, Sales
March 23, 2009
Earlier this month I had the pleasure of participating in several sessions at Convergence 2009 in New Orleans. One of my favorites was a CRMUG session on data migration and integration. I was on a roundtable panel with Tim Thorpe and Leslie Guffey from YRC Logistics and Brendan Peterson from Scribe Software.
There were a lot of great questions asked, and I learned several tips that can really help speed up DTS performance for migrations and integrations with Scribe.
1. Multiple DTS--One thing I've always wondered about is the effect of running multiple instances of the Scribe Workbench on one machine. For example, if you have three data loads to import, is it faster to run them in parallel, or would that take just as long as running them one after another?
Brendan confirmed that running multiple instances of the Workbench simultaneously is supported, and that if you have a powerful server with multiple processors/cores, this will be much faster than running each job sequentially.
2. Picklist validation--Like it sounds, this is a feature in the Microsoft CRM adapter that checks that picklist values in your source are valid, and this check adds overhead to the DTS. If you know that the picklist values in your source are valid, or if your source contains no picklist data, disabling picklist validation can significantly decrease DTS execution time. To disable it, in the Scribe Workbench click Target, select "Adapter Settings," and on the General Settings tab uncheck "Validate Microsoft Dynamics CRM picklist fields."
3. Increase batch query size--This is the size of the cache used for queries. If you have a beefy server with plenty of RAM and processor headroom, you can increase this value as high as 5,000, which can improve the performance of your DTS. The query batch size setting is located in the adapter's General Settings (same place as #2).
Your results may vary based on your specific DTS and the available resources in your environment. Have any additional data migration tips (they don't have to be Scribe related)? Leave a comment or send me a message and I will post a follow-up.