Data Management

Objectives
By the end of this module, you should be able to:
1.Explain typical data management operations.
2.Describe typical use cases for inserting system fields.
3.List the ways to obtain record IDs.
4.Perform mass transfers of records.
5.Describe external IDs.
6.Explain the basics of object relationships.

Data Management Operations
1.Data management:
- Is an on-going process.
- Is not a one-time task.
2.Various data management operations include:
- Exporting data
- Deleting data
- Inserting data
- Updating data
- Upserting data

Exporting Data
1.Data can be exported from Salesforce into a set of CSV files.

2.Data is exported to:
- Create backups.
- Obtain record IDs for reference.
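Because exports arrive as CSV files, they can be processed with any CSV-aware tool. As a minimal sketch (the IDs and names below are made up for illustration), collecting record IDs from an export file in Python might look like:

```python
import csv
import io

# A tiny stand-in for a Data Loader export file (hypothetical data).
export_csv = """Id,Name
0012w00000AbCdEAAX,Acme Corp
0012w00000XyZzYAAV,Globex Inc
"""

# Collect the record IDs so they can be referenced in later update operations.
record_ids = [row["Id"] for row in csv.DictReader(io.StringIO(export_csv))]
print(record_ids)  # → ['0012w00000AbCdEAAX', '0012w00000XyZzYAAV']
```

In practice the file would be read from disk rather than an in-memory string; the parsing logic is the same.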

Deleting Data
1.Data can be deleted from Salesforce using the delete data operation.
2.This operation is used to:
- Free up the space used by out-of-date or bad data.
- Fix mistakes.
3.On deletion, data is moved to the Recycle Bin.

Insert Data
1.The insert data operation can be used to load data into Salesforce.
2.This operation can be used for:
- Initial Salesforce setup
- Migrating data from legacy systems.
- Loading data into a sandbox.

Inserting System Fields
1.Salesforce provides the Inserting System Fields feature to set a record's system fields.
2.Inserting System Fields:
- Works only at the time of record creation; system fields cannot be set on update.
- Is accessible only through API-based data management tools.
- Works for all custom objects.
- Is restricted to the Account, Opportunity, Contact, Lead, Case, Task, and Event standard objects.

Updating Data
1.The updating data operation is used to:
- Add data to existing records.
- Transfer ownership of records to a different user.
2.Salesforce record IDs are required to perform this operation.

Salesforce Record IDs
1.A Salesforce record ID is:
- An ID value generated by Salesforce when a new record is created.
- A unique identifier of the record.
- Analogous to a primary or foreign key field in a database table.
2.Salesforce record IDs exist in two forms:
- The 15-character case-sensitive form.
For example: 005E0000000KF38
- The 18-character case-insensitive form.
For example: 005E0000000KF38IAG
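The 18-character form is derived from the 15-character form by appending a three-character checksum: each 5-character block contributes one suffix character that encodes which positions in the block are uppercase. A minimal Python sketch of that conversion:

```python
def to_18_char(sf_id):
    """Derive the 18-character case-insensitive Salesforce ID
    from the 15-character case-sensitive form."""
    if len(sf_id) != 15:
        raise ValueError("expected a 15-character ID")
    suffix_chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"
    suffix = ""
    for start in (0, 5, 10):
        chunk = sf_id[start:start + 5]
        # Build a 5-bit index: bit i is set when character i is uppercase.
        index = sum(1 << i for i, ch in enumerate(chunk) if ch.isupper())
        suffix += suffix_chars[index]
    return sf_id + suffix

print(to_18_char("005E0000000KF38"))  # → 005E0000000KF38IAG
```

Because the suffix records the uppercase positions, the 18-character form can be compared case-insensitively without ambiguity.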

Obtaining Record IDs
You can access record IDs in four ways:
1.From the URL (15-character ID).
2.From a report:
- In a report, record IDs for all records are displayed in a separate column.
- Reports return the 15-character ID.
3.Through the SOAP-based Web Services API:
- All record IDs are obtained when records are extracted using the Web Services API.
4.Through formulas:
- You can access record IDs by creating a formula (15-character ID).

Mass Transferring Records
1.The mass transfer tool is used to transfer multiple accounts, leads, or custom object records from one user to another.
2.To transfer records, the following permissions are required:
- "Transfer Record" or "Transfer Leads".
- "Edit" on the specified object.
- "Read" on the records being transferred.

Upserting Data
1.The upsert data operation migrates new and existing records from a legacy system to Salesforce.
2.The upsert data operation helps avoid creating duplicate records for the same ID.
3.This operation uses a Salesforce ID or an external ID to create a new record or update an existing record as follows:
- If the ID is not matched, a new record is created.
- If the ID is matched once, the existing record is updated.
- If the ID is matched multiple times, an error is reported.
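The three matching rules above can be mimicked with plain in-memory data. This sketch is not the Salesforce API itself — just an illustration of the create/update/error logic using a list of dicts and a hypothetical external ID field:

```python
def upsert(records, ext_id_field, incoming):
    """Mimic Salesforce upsert matching against an in-memory record list:
    no match -> create, exactly one match -> update, several -> error."""
    matches = [r for r in records if r.get(ext_id_field) == incoming[ext_id_field]]
    if not matches:
        records.append(dict(incoming))
        return "created"
    if len(matches) == 1:
        matches[0].update(incoming)
        return "updated"
    raise ValueError(f"duplicate external ID: {incoming[ext_id_field]}")

accounts = [{"Legacy_Key__c": "ACC-01", "Name": "Acme"}]
print(upsert(accounts, "Legacy_Key__c", {"Legacy_Key__c": "ACC-02", "Name": "Globex"}))     # → created
print(upsert(accounts, "Legacy_Key__c", {"Legacy_Key__c": "ACC-01", "Name": "Acme Corp"}))  # → updated
```

The field name `Legacy_Key__c` is an assumption for the example; any external ID custom field would behave the same way.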

External IDs
1.External IDs are useful when migrating or integrating data between a legacy system and Salesforce.
2.An external ID:
- Is a flag that can be set on a custom field.
- Can be created for any custom field of type text, number, or email.
3.Each object can have up to three external ID fields.

Upsert with Object Relationships
1.Relationships exist between objects.
2.Object relationships affect the order in which data can be managed.
3.Relationships are expressed through:
- Related lists and lookups in the application.
- Foreign keys or IDs in the database.
4.The upsert operation allows you to load records together with their relationships by using external IDs.

Steps to Load Data
1.Object relationships introduce data dependencies.
2.Dependencies dictate the order of data load.
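Determining a valid load order from these dependencies is a topological sort: parent objects must be loaded before the children that look up to them. A sketch using Python's standard library (the dependency map below is a hypothetical example, not a fixed Salesforce requirement):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical dependency map: each object lists the parents it looks up to.
dependencies = {
    "Account": set(),
    "Contact": {"Account"},
    "Opportunity": {"Account"},
    "OpportunityContactRole": {"Opportunity", "Contact"},
}

# Parents come first, so lookup IDs can be resolved when children are loaded.
load_order = list(TopologicalSorter(dependencies).static_order())
print(load_order)
```

Here Account loads first, and OpportunityContactRole loads only after both Opportunity and Contact exist.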

Summary
1.Data management is an on-going process to keep the data in your application up-to-date, accurate, and clean.
2.To manage data effectively and efficiently, perform data management operations such as exporting, inserting, updating, upserting, and deleting data.
3.When inserting data, you have the option of inserting system fields.
4.Salesforce generates an ID value when you create a new record and add data.
5.Record IDs express object relationships, which define the order in which data can be managed.

Data Management Tool

Objectives
By the end of this module, you should be able to:
1.List available tools to perform data management operations.
2.Use the Data Loader to perform data management operations.
3.Define the Bulk API and its use cases.

Tools for Data Migration
Data can be migrated into Salesforce using:
1.Import Wizards, or
2.Web service API.

Import Wizards
1.Are easy to use
2.Can be used to load up to 50,000 records.
3.Load accounts, contacts, leads, solutions, or custom objects.

API-Based Tools
1.Load any object supported by the API.
2.Load more than 50,000 records.
3.Schedule regular data loads, such as nightly feeds.
4.Export data for backup.
5.Delete multiple supported objects at the same time.
6.Tools that function through the Web Services API include:
- Data Loader
- Partner tools
- Custom-built tools
- Open-source tools

Data Loader:
1.Is a Salesforce product.
2.Supports data import from and export to CSV files.
3.Supports data load from and export to a database through JDBC.
4.Supports custom relationships for upsert.
5.Can be run from the command line.
6.Can be run in batch mode.

Obtaining the Data Loader
The Data Loader:
1.Can be downloaded by System Administrators.
2.Is available in Unlimited Edition (UE), Enterprise Edition (EE), and Developer Edition (DE) orgs.

Other Available API Data Management Tools
1.Additional data management tools can be obtained from: http://developer.force.com

Bulk API
The Bulk API:
1.Is used to load high-volume data.
2.Is optimized to perform insert, update, upsert, or delete operations on large numbers of records.
3.Improves throughput when loading large data sets due to parallel processing.
4.Can be monitored by navigating to the Monitoring section in the Setup menu.

How the Bulk API Works
1.Data is transferred at full network speed, reducing dropped connections.
2.The whole data set is managed as a job that can be monitored and controlled from the Setup menu.
3.The data set can be processed faster by allocating multiple servers to process it in parallel.

Using Data Loader with the Bulk API
1.Data Loader uses SOAP-based Web services by default.
2.To use the Bulk API, enable the Bulk API option.
3.Salesforce provides an additional serial mode for the Bulk API.
4.With serial mode, batches are processed one at a time.
5.Hard deletes, which bypass the Recycle Bin, can be performed using the Hard Delete option.
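Bulk API jobs submit records in fixed-size batches, which is what allows the parallel (or serial) processing described above. A minimal sketch of that batching step (the 10,000-record ceiling reflects the Bulk API's documented per-batch limit; confirm against current Salesforce documentation):

```python
def batches(records, size=10_000):
    """Split a record list into fixed-size batches, as a Bulk API
    client would before submitting them to a job."""
    for start in range(0, len(records), size):
        yield records[start:start + size]

# Small illustrative batch size so the split is visible.
rows = list(range(25))
print([len(b) for b in batches(rows, size=10)])  # → [10, 10, 5]
```

Each yielded batch would then be uploaded to the job and processed in parallel, or one at a time in serial mode.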

Summary
1.Data can be managed either using the import wizards or through the API.
2.The import wizards do not require any programming or developer skills.
3.API-based tools can be used to schedule regular data loads such as nightly feeds, export data for backup, and delete multiple supported objects data.
