Data Management

Objectives
By the end of this module, you should be able to:
1. Explain typical data management operations.
2. Describe typical use cases for inserting system fields.
3. List the ways to obtain record IDs.
4. Perform mass transfers of records.
5. Describe external IDs.
6. Explain the basics of object relationships.

Data Management Operations
1. Data management:
- Is an ongoing process.
- Is not a one-time task.
2. Data management operations include:
- Exporting data
- Deleting data
- Inserting data
- Updating data
- Upserting data

Exporting Data
1. Data can be exported from Salesforce into a set of CSV files.

2. Data is exported to:
- Create backups.
- Obtain record IDs for reference.
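A minimal sketch of what an exported CSV backup looks like, using hypothetical records rather than real export output (a real export would come from a tool such as the Data Loader or the weekly export service):

```python
import csv

# Hypothetical exported records; field names and IDs are illustrative.
records = [
    {"Id": "0015000000AbCdEFG", "Name": "Acme Corp"},
    {"Id": "0015000000XyZwVUT", "Name": "Globex Inc"},
]

# Write the backup as a CSV file, one row per record.
with open("accounts_backup.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["Id", "Name"])
    writer.writeheader()
    writer.writerows(records)

# The Id column can later serve as the reference for updates or deletes.
with open("accounts_backup.csv") as f:
    ids = [row["Id"] for row in csv.DictReader(f)]
print(ids)
```

Keeping the Id column in every export is what makes the backup usable as input for a later update or delete operation.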

Deleting Data
1. Data can be deleted from Salesforce using the delete operation.
2. This operation is used to:
- Free up space occupied by out-of-date or bad data.
- Fix mistakes.
3. On deletion, data is moved to the Recycle Bin.

Inserting Data
1. The insert operation can be used to load data into Salesforce.
2. This operation is used for:
- Initial Salesforce setup.
- Migrating data from legacy systems.
- Loading data into a sandbox.

Inserting System Fields
1. Salesforce provides the Insert System Fields feature to set system fields (such as CreatedDate) explicitly.
2. Inserting system fields:
- Works only when a record is created; system fields cannot be set on update.
- Is available only through API-based data management tools.
- Works for all custom objects.
- Is restricted to the Account, Opportunity, Contact, Lead, Case, Task, and Event standard objects.

Updating Data
1. The update operation is used to:
- Add data to existing records.
- Transfer ownership of records to a different user.
2. Salesforce record IDs are required to perform this operation.

Salesforce Record IDs
1. A Salesforce record ID is:
- An ID value generated by Salesforce when a new record is created.
- A unique identifier for the record.
- Analogous to a primary or foreign key field in a database table.
2. Salesforce record IDs exist in two forms:
- The 15-character, case-sensitive form.
For example: 005E0000000KF38
- The 18-character, case-insensitive form.
For example: 005E0000000KF38IAG
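The 18-character form is derived from the 15-character form by a well-known checksum: each 5-character chunk contributes one suffix character encoding which of its positions are uppercase, making the longer form safe in case-insensitive tools such as Excel. A sketch of the conversion:

```python
# Characters used for the three suffix positions (A-Z then 0-5).
SUFFIX_CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"

def to_18(record_id: str) -> str:
    """Convert a 15-character Salesforce ID to its 18-character form."""
    assert len(record_id) == 15
    suffix = ""
    for chunk_start in range(0, 15, 5):
        chunk = record_id[chunk_start:chunk_start + 5]
        # Build a 5-bit index: bit i is set if character i is uppercase.
        index = sum(1 << i for i, ch in enumerate(chunk) if ch.isupper())
        suffix += SUFFIX_CHARS[index]
    return record_id + suffix

print(to_18("005E0000000KF38"))  # prints 005E0000000KF38IAG
```

Applied to the example ID above, this reproduces the 18-character form shown in the slide.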

Obtaining Record IDs
You can access record IDs in four ways:
1. From the URL (15-character ID).
2. From a report:
- In a report, record IDs for all records are displayed in a separate column.
- Reports return the 15-character ID.
3. Through the SOAP-based Web services API:
- All record IDs are obtained when records are extracted using the Web services API.
4. Through formulas:
- You can access record IDs by creating a formula (15-character ID).

Mass Transferring Records
1. The mass transfer tool is used to transfer multiple accounts, leads, or custom object records from one user to another.
2. To transfer records, the following permissions are required:
- "Transfer Record" or "Transfer Leads".
- "Edit" on the specified object.
- "Read" on the records being transferred.

Upserting Data
1. The upsert operation migrates new and existing records from a legacy system to Salesforce in a single step.
2. It helps avoid creating duplicate entries for the same ID.
3. The operation uses a Salesforce ID or an external ID to create a new record or update an existing record as follows:
- If the ID is not matched, a new record is created.
- If the ID is matched once, the existing record is updated.
- If the ID is matched multiple times, an error is reported.
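The three matching rules above can be sketched locally — this is an illustration of the semantics, not the Salesforce API itself; the table and the `Ext_Id__c` field name are hypothetical:

```python
def upsert(table, external_id, fields):
    """Apply upsert matching rules against a list of record dicts."""
    matches = [r for r in table if r["Ext_Id__c"] == external_id]
    if len(matches) == 0:
        # No match: a new record is created.
        table.append({"Ext_Id__c": external_id, **fields})
        return "created"
    if len(matches) == 1:
        # Exactly one match: the existing record is updated.
        matches[0].update(fields)
        return "updated"
    # Multiple matches: an error is reported.
    raise ValueError("external ID matched multiple records")

table = []
r1 = upsert(table, "LEG-001", {"Name": "Acme"})       # created
r2 = upsert(table, "LEG-001", {"Name": "Acme Corp"})  # updated, no duplicate
```

Running the same input twice yields one record, which is exactly why upsert is the operation of choice for repeatable migrations.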

External IDs
1. External IDs are useful when migrating or integrating data between a legacy system and Salesforce.
2. An external ID:
- Is a flag that can be set on a custom field.
- Can be created for any custom field of type text, number, or email.
- Can be defined on up to three fields per object.

Upsert with Object Relationships
1. Relationships exist between objects.
2. Object relationships affect the order in which data can be managed.
3. Relationships are expressed through:
- Related lists and lookup fields in the application.
- Foreign keys or IDs in the database.
4. The upsert operation allows you to load records together with their relationships by using external IDs.

Steps to Load Data
1. Object relationships introduce data dependencies.
2. Dependencies dictate the order in which data is loaded.
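The load-order rule is simply that parent records must exist before the child records that reference them. A sketch with hypothetical objects and lookup relationships:

```python
# Each object maps to the parent objects it looks up (illustrative only).
depends_on = {
    "Account": [],
    "Contact": ["Account"],              # Contact looks up Account
    "Opportunity": ["Account"],
    "OpportunityContactRole": ["Opportunity", "Contact"],
}

def load_order(deps):
    """Return an order in which every object follows its parents."""
    order, done = [], set()
    while len(order) < len(deps):
        for obj, parents in deps.items():
            if obj not in done and all(p in done for p in parents):
                order.append(obj)
                done.add(obj)
    return order

order = load_order(depends_on)
print(order)  # Account first, OpportunityContactRole last
```

This is a topological sort of the relationship graph; any order it produces satisfies the dependency rule.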

Summary
1. Data management is an ongoing process to keep the data in your application up to date, accurate, and clean.
2. To manage data effectively and efficiently, perform data management operations such as exporting, inserting, updating, upserting, and deleting data.
3. When inserting data, you have the option of setting system fields.
4. Salesforce generates an ID value when you create a new record.
5. Object relationships define the order in which data can be loaded and managed.

Data Management Tool

Objectives
By the end of the module, you should be able to:
1. List the available tools for performing data management operations.
2. Use the Data Loader to perform data management operations.
3. Define the Bulk API and its use cases.

Tools for Data Migration
Data can be migrated into Salesforce using:
1. Import wizards, or
2. The Web services API.

Import Wizards
1. Are easy to use.
2. Can be used to load up to 50,000 records.
3. Load accounts, contacts, leads, solutions, or custom objects.

API-Based Tools
1. Load any object supported by the API.
2. Load more than 50,000 records.
3. Schedule regular data loads, such as nightly feeds.
4. Export data for backup.
5. Delete multiple supported objects at the same time.
6. Tools that work through the Web services API include:
- Data Loader
- Partner tools
- Custom-built tools
- Open source tools

Data Loader
1. Is a Salesforce product.
2. Supports data import from and export to CSV files.
3. Supports data load from and export to a database through JDBC.
4. Supports custom relationships for upsert.
5. Can be run from the command line.
6. Can be run in batch mode.

Obtaining the Data Loader
The Data Loader:
1. Can be downloaded by system administrators.
2. Is available in Unlimited Edition (UE), Enterprise Edition (EE), and Developer Edition (DE) orgs.

Other Available API Data Management Tools
1. Additional data management tools can be obtained from http://developer.force.com.

Bulk API
The Bulk API:
1. Is used to load high volumes of data.
2. Is optimized to perform insert, update, upsert, and delete operations on large numbers of records.
3. Improves throughput when loading large data sets due to parallel processing.
4. Can be monitored from the Monitoring section of the Setup menu.

How the Bulk API Works
1. Data is transferred at full network speed, reducing dropped connections.
2. The whole data set is managed as a job that can be monitored and controlled from the Setup menu.
3. The data set can be processed faster by allocating multiple servers to process batches in parallel.
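The batching idea behind a Bulk API job can be sketched locally — the batch size of 3 below is purely illustrative (real Bulk API batches are much larger, historically up to 10,000 records each):

```python
def split_into_batches(records, batch_size):
    """Split a job's records into fixed-size batches for parallel workers."""
    return [records[i:i + batch_size]
            for i in range(0, len(records), batch_size)]

# Ten hypothetical rows split into batches of three.
records = [f"row-{n}" for n in range(10)]
batches = split_into_batches(records, 3)
print(len(batches))  # 4 batches: 3 + 3 + 3 + 1
```

Because each batch is independent, multiple servers can process batches concurrently, which is where the throughput gain over record-by-record SOAP calls comes from.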

Using Data Loader with the Bulk API
1. The Data Loader uses SOAP-based Web services by default.
2. To use the Bulk API, enable the Bulk API option in the Data Loader settings.
3. Salesforce provides an additional serial mode for the Bulk API.
4. In serial mode, batches are processed one at a time instead of in parallel.
5. Hard deletes can be performed using the Hard Delete operation; hard-deleted records bypass the Recycle Bin.

Summary
1. Data can be managed either with the import wizards or through the API.
2. The import wizards do not require any programming or developer skills.
3. API-based tools can be used to schedule regular data loads (such as nightly feeds), export data for backup, and delete data for multiple supported objects.
