To get more information about Microsoft SQL Server 2008 R2, you can visit the SQL Server 2008 R2 site.
To find a full comparison of key capabilities across the SQL Server 2008 Enterprise, Standard, Workgroup, Web, and Express editions, just click on Microsoft.
SQLHero delivered the June session on Business Intelligence Scenarios (SQL 2008)
| Topic    | Business Intelligence Scenarios (SQL 2008) |
| Date     | June 4, 2009 |
| Time     | 10:00 – 3:00 |
| Location | Microsoft, Smart Village |
With the arrival of SQL Server 2008 we gained access to a new feature called Change Data Capture (CDC). CDC enables you to constantly stream change data to a consuming process, most notably Extract, Transform, and Load (ETL) applications. In the SQL Server environment the ETL application used is of course SQL Server Integration Services (SSIS).
The first obvious constraint of SQL Server 2008’s CDC feature is that it is only available to SSIS applications that source their data from an OLTP system built on SQL Server 2008. In the field of implementing Microsoft BI, there are many relational database systems we have to source data from that are not SQL Server based (and if they are SQL Server based, there is a good chance they are not running the 2008 version). All major relational database systems use some form of transaction log to record the history of their actions in case of rollback or hardware failure. I would imagine that the actual contents of each database vendor’s log differ; however, they all share the same basic requirement to temporarily persist database actions.
So let’s look at how to enable Change Data Capture on a database and on a table, and how to keep track of Data Definition Language (DDL) changes on a table.
Step 1
Let’s create a database named SqlHero, as shown below.
```sql
USE [master]
GO
/*** Object: Database [SqlHero] ***/
IF EXISTS (SELECT name FROM sys.databases WHERE name = N'SqlHero')
DROP DATABASE [SqlHero]
GO
USE [master]
GO
/*** Object: Database [SqlHero] ***/
CREATE DATABASE [SqlHero]
GO
```
Step 2
Now let’s create a table named MyTable in the SqlHero database, as shown below.
```sql
USE [SqlHero]
GO
/*** Object: Table [dbo].[MyTable] ***/
IF EXISTS (SELECT * FROM sys.objects
           WHERE object_id = OBJECT_ID(N'[dbo].[MyTable]') AND type IN (N'U'))
DROP TABLE [dbo].[MyTable]
GO
USE [SqlHero]
GO
/*** Object: Table [dbo].[MyTable] ***/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[MyTable](
    [ID] [int] NOT NULL,
    [Name] [varchar](100) NULL,
    CONSTRAINT [MyTable_PK] PRIMARY KEY CLUSTERED
    (
        [ID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
            ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
```
Step 3
In order to track changes on the table, we first need to enable the Change Data Capture feature at the database level, using the following Transact-SQL command.
Until this point, the only schema that exists on the database is dbo. Once we enable Change Data Capture, a new schema with a bunch of objects will be created.
```sql
USE [SqlHero]
GO
EXEC sys.sp_cdc_enable_db_change_data_capture
GO
```
The following CDC tables are created under the CDC schema, as shown below
- cdc.captured_columns
- cdc.change_tables
- cdc.ddl_history
- cdc.index_columns
- cdc.lsn_time_mapping
When you query these tables, they all return zero rows.
```sql
SELECT * FROM cdc.captured_columns
SELECT * FROM cdc.change_tables
SELECT * FROM cdc.ddl_history
SELECT * FROM cdc.index_columns
SELECT * FROM cdc.lsn_time_mapping
```
```
(0 row(s) affected)
(0 row(s) affected)
(0 row(s) affected)
(0 row(s) affected)
(0 row(s) affected)
```
In order to track changes on the table, we also need to enable the Change Data Capture feature on the table itself, using the following Transact-SQL command.
```sql
USE [SqlHero]
GO
EXEC sys.sp_cdc_enable_table_change_data_capture
    @source_schema = 'dbo',
    @source_name   = 'MyTable',
    @role_name     = 'cdc_MyTable'
GO
```
```
Job 'cdc.SqlHero_capture' started successfully.
Job 'cdc.SqlHero_cleanup' started successfully.
```
From this output we can see that SQL Server Agent is required to perform the capture and cleanup; the two jobs are created as SQL Server scheduled jobs. [Refer Fig 1.6, 1.7]
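Since CDC depends on these Agent jobs, a quick sanity check is to list them from msdb. This is a sketch; the `cdc.%` pattern matches the default job names SQL Server generates.

```sql
-- List the CDC capture and cleanup jobs created when CDC was enabled
USE [msdb]
GO
SELECT name, enabled
FROM dbo.sysjobs
WHERE name LIKE 'cdc.%'
GO
```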
Now let’s execute the following Transact SQL to see if any data has been inserted on any of the CDC tables.
```sql
SELECT * FROM cdc.captured_columns
SELECT * FROM cdc.change_tables
SELECT * FROM cdc.index_columns
```
```
object_id   column_name  column_id  column_type  column_ordinal  is_computed
389576426   ID           1          int          1               0
389576426   Name         2          varchar      2               0

(2 row(s) affected)

object_id   version  source_object_id  capture_instance  start_lsn  end_lsn  supports_net_changes  has_drop_pending  role_name    index_name  filegroup_name  create_date
389576426   0        53575229          dbo_MyTable       NULL       NULL     0                     NULL              cdc_MyTable  MyTable_PK  NULL

(1 row(s) affected)

object_id   column_name  index_ordinal  column_id
389576426   ID           1              1

(1 row(s) affected)
```
You can see that the CDC schema stores metadata about which tables and columns are being tracked by Change Data Capture, as well as the indexes defined on each tracked table.
We can check whether Change Data Capture is enabled on a database by using the following Transact-SQL statement.
```sql
SELECT is_cdc_enabled FROM sys.databases WHERE name = 'SqlHero'
```
```
is_cdc_enabled
--------------
1

(1 row(s) affected)
```
We can check whether Change Data Capture is enabled on a table by using the following Transact-SQL statement.
```sql
SELECT is_tracked_by_cdc FROM sys.tables WHERE name = 'MyTable'
```
```
is_tracked_by_cdc
-----------------
1

(1 row(s) affected)
```
Now let’s make some changes to the table structure and see whether Change Data Capture captures them. Execute the following statements.
```sql
ALTER TABLE MyTable ADD Address varchar(500)
ALTER TABLE MyTable ADD Salary money
ALTER TABLE MyTable ADD Bonus money
```
Query the CDC table cdc.ddl_history as shown below.
```sql
SELECT * FROM cdc.ddl_history
```
```
source_object_id  object_id  required_column_update  ddl_command                                    ddl_lsn                 ddl_time
53575229          389576426  0                       Alter Table MyTable add Address varchar(500)   0x0000001C000001350001
53575229          389576426  0                       Alter Table MyTable add Salary money           0x0000001C000001370018
53575229          389576426  0                       Alter Table MyTable add Bonus money            0x0000001C0000013D0018

(3 row(s) affected)
```
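Beyond DDL history, the real point of CDC is reading the captured DML. As a sketch (assuming the capture instance is named dbo_MyTable, as reported in cdc.change_tables earlier), you can generate a few changes and read them back through the generated table-valued function:

```sql
USE [SqlHero]
GO
-- Generate some change activity for CDC to capture
INSERT INTO dbo.MyTable (ID, Name) VALUES (1, 'Alice')
UPDATE dbo.MyTable SET Name = 'Alicia' WHERE ID = 1
GO

-- Read all captured changes for the dbo_MyTable capture instance.
-- __$operation: 1 = delete, 2 = insert, 3 = value before update, 4 = value after update
DECLARE @from_lsn binary(10), @to_lsn binary(10)
SELECT @from_lsn = sys.fn_cdc_get_min_lsn('dbo_MyTable')
SELECT @to_lsn   = sys.fn_cdc_get_max_lsn()
SELECT __$operation, ID, Name
FROM cdc.fn_cdc_get_all_changes_dbo_MyTable(@from_lsn, @to_lsn, 'all')
GO
```

Note that the capture job reads the transaction log asynchronously, so there can be a short delay before changes appear in the change table.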
Policy-Based Management facet
A set of logical properties that model the behavior or characteristics of certain types of managed targets. The number and characteristics of the properties are built into the facet and can be added or removed only by the maker of the facet. A target type can implement one or more management facets, and a management facet can be implemented by one or more target types. Some properties of a facet apply only to a specific version. For example, the Database Mail property of the Surface Area Configuration facet applies only to SQL Server 2005 and later versions.
Policy-Based Management condition
A Boolean expression that specifies a set of allowed states of a Policy-Based Management managed target with regard to a management facet.
Policy-Based Management policy
A Policy-Based Management condition and the expected behavior, for example, evaluation mode, target filters, and schedule. A policy can contain only one condition. Policies can be enabled or disabled.
1) Create New Condition
2) Create New Policy
Now try to create a table in any database: if the table name starts with "tbl", creation will succeed; otherwise, creation will throw an exception.
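To illustrate the scenario, here is a sketch of what you would see with such a policy in place (assuming a condition on the Table facet along the lines of @Name LIKE 'tbl%', evaluated with the "On change: prevent" mode):

```sql
-- A compliant name succeeds under the naming policy...
CREATE TABLE dbo.tblCustomers (ID int NOT NULL)
GO

-- ...while a non-compliant name is blocked: the policy raises an error
-- and rolls the CREATE TABLE statement back.
CREATE TABLE dbo.Customers (ID int NOT NULL)
GO
```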
We’re all feeling the financial crunch these days, and an increasing number of companies are trying to figure out how to get the most out of their IT dollars. One of the cost-saving ideas being floated around IT departments is to extend the replacement cycle of hardware. This can be done by extending your hardware manufacture warranty for one or more years. But extending the hardware lifecycle can only do so much. What can you do when you are running low on hardware resources like disk space?
SQL Server 2008 introduces some new features to help deal with disk capacity issues without making IT go out and buy more disk drives. Some of SQL Server’s new compression options can help you reclaim some of your valuable disk space and postpone the purchase of additional disk capacity for a while, maybe even years. In this article, I will discuss table and backup compression options available in SQL Server 2008 to help you reduce your database disk space usage.
Data compression was first introduced in SQL Server 2005 Service Pack 2, where Microsoft added a new storage format called vardecimal to store decimal and numeric data types as variable-length columns. In SQL Server 2008 this concept is extended to fixed-length data types such as integer, char, and float. The main advantage of data compression is that it reduces storage cost and improves query performance by reducing I/O. A database administrator needs to be careful before using data compression, however, as it is a CPU-intensive activity. If you are not facing a CPU crunch and your workload is I/O bound, then implementing data compression is a good idea and can give you better query performance.
Compressing Tables and Indexes
SQL Server 2008 allows you to compress both tables and indexes. Compression happens at the row or page level. No application changes are required to implement table or index compression. Data compression is supported for the following table objects:
- Tables stored as a heap
- Tables stored as a clustered index
- Non-clustered indexes
- Indexed views
- Each partition of a partitioned table
When a clustered index is compressed the non-clustered indexes are not automatically compressed. You need to set up compression on non-clustered indexes when they are created or altered. System tables cannot be compressed.
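As a sketch, compressing a non-clustered index looks like this (the index name IX_MyTable_Name is made up for illustration):

```sql
-- Compress a non-clustered index at creation time...
CREATE NONCLUSTERED INDEX IX_MyTable_Name
ON dbo.MyTable (Name)
WITH (DATA_COMPRESSION = ROW)
GO

-- ...or rebuild an existing index with compression enabled.
ALTER INDEX IX_MyTable_Name ON dbo.MyTable
REBUILD WITH (DATA_COMPRESSION = PAGE)
GO
```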
Row and Page Compression
What is the difference between row- and page-level compression? Row compression happens on each individual row, whereas page compression considers all rows on a page and compresses the page based on common strings of data throughout the page.
Row compression takes all fixed-length fields and compresses them by making them variable-length fields. When I say fixed-length fields, I don’t mean only character data but also numeric data. The numeric data type fields are evaluated and compressed into the smallest possible amount of space. For instance, a BIGINT column normally takes 8 bytes, but if a BIGINT column only contains, say, the value “1”, then it would be compressed to use only 1 byte instead of 8. In addition to compressing these fixed-length fields, row compression reduces the space required for metadata in each row.
Page-level compression looks at an entire page when it does compression and compresses common strings of data. This is done by using row compression plus prefix and dictionary compression techniques. Prefix compression is the process of removing commonly occurring prefixes from column values. These prefixes are placed in a Compression Information (CI) structure in the header of the page, and the column values are then changed so they reference the CI information in the header. Dictionary compression is done after prefix compression. Dictionary compression is the process of finding repeating occurrences of values in the page, placing these values in the CI structure, and replacing the values with an appropriate index into the CI information.
The diagrams below, taken from SQL Server 2008 Books Online, visually represent how prefix and dictionary compression are done at the page level.
[Diagrams: Page before compression | Page after Prefix Compression | Page after Dictionary Compression]
When the first row is added to a page, only row compression is performed, not page compression. When the second row is added, page compression is attempted. If the SQL Server engine determines that the space saving from page compression is not significant, page compression is not performed.
By using Row and Page compression SQL Server can save considerable amount of disk space. Let’s look at examples of how to set up compression.
Considerations for Compression
There are a few things you need to know about compression. First, compression is only available in the Enterprise and Developer editions of SQL Server 2008.
There are a few more things to keep in mind:
- Compressing data allows more rows to fit in a page, so each I/O performed by the database engine reads or writes more rows of data.
- Even though you are compressing rows, the maximum row size remains the same. SQL Server will not enable a table for compression if the maximum row size plus the compression overhead is greater than 8,060 bytes. This is because SQL Server needs to ensure that a row can be stored in a single page even if every column is filled to its maximum capacity.
- If your table has multiple partitions, you can enable data compression per partition; not all partitions need to be compressed.
- Indexes are not automatically compressed just because you compressed the table; you need to compress indexes explicitly. There is one exception to this rule: if your table is stored as a compressed heap, a clustered index created on that table inherits the compression settings from the heap.
- If you change the compression settings of a heap, all non-clustered indexes on that heap need to be rebuilt, so that the row pointers in the non-clustered indexes can point to the new locations of the rows in the heap.
- Large record data stored outside the row is not compressed.
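For example, per-partition compression can be set up like this (a sketch; dbo.MyPartitionedTable is a hypothetical partitioned table):

```sql
-- Compress only partition 1 with PAGE compression,
-- leaving the remaining partitions uncompressed.
ALTER TABLE dbo.MyPartitionedTable
REBUILD PARTITION = 1
WITH (DATA_COMPRESSION = PAGE)
GO
```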
Compressing data does have some impact on the internal workings of SQL Server. When a page is written to disk it is compressed and when it is read back it is uncompressed. This means additional CPU overhead is used to do this compression, which might be a concern if your machine does not have extra CPU capacity to burn.
How to Estimate Compression Savings
Microsoft has provided a stored procedure (SP) named "sp_estimate_data_compression_savings." This SP helps you identify how much disk space will be saved if you implement compression on a table. With this SP you identify the object you want to compress and the type of compression you want to perform. The SP then takes your object, creates a copy of it in tempdb, and compresses the copy to determine how much space the compressed table would take. The amount of space used by the original table and the expected compressed size are output from this SP. By using this SP you can determine which tables and compression types are best for your environment. Here is an example execution of this SP:
```sql
EXEC sp_estimate_data_compression_savings
    @schema_name = 'Production'
  , @object_name = 'TransactionHistory'
  , @index_id = null
  , @partition_number = null
  , @data_compression = 'ROW';
```
In this example I’m checking how much space I can save by compressing the AdventureWorks2008.Production.TransactionHistory table and all of its indexes using ROW compression. Here is the output I get from this SP:
From this output we can see that the AdventureWorks2008.Production.TransactionHistory clustered index is currently using 6,352 KB. The "sp_estimate_data_compression_savings" SP has identified that, with ROW compression, the compressed size of the clustered index will be 4,176 KB. This is a space savings of almost 35 percent. As you can see, the ROW compression savings on the other, non-clustered indexes of this table are not as significant.
I can also check how much disk space will be saved if I use PAGE compression on the AdventureWorks2008.Production.TransactionHistory table by running the following command:
```sql
EXEC sp_estimate_data_compression_savings
    @schema_name = 'Production'
  , @object_name = 'TransactionHistory'
  , @index_id = null
  , @partition_number = null
  , @data_compression = 'PAGE';
```
When I run this command I get the following output:
Here you can see that the space savings from PAGE compression are considerably greater than from ROW compression for each index in the AdventureWorks2008.Production.TransactionHistory table. With PAGE-level compression, index_id = 1 (the clustered index) saves around 65 percent of the disk space it currently uses. The two non-clustered indexes do not show a huge amount of savings, but some space is still saved by using PAGE compression.
How to Set Up Table Compression
There are a number of different ways to compress a table. Let me show you a few of these methods.
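The two most common methods are to declare compression when the table is created, or to rebuild an existing table with a compression setting. A sketch (dbo.SalesArchive is a hypothetical table):

```sql
-- Method 1: specify compression when the table is created
CREATE TABLE dbo.SalesArchive (
    OrderID   int      NOT NULL,
    OrderDate datetime NOT NULL,
    Amount    money    NOT NULL
) WITH (DATA_COMPRESSION = PAGE)
GO

-- Method 2: rebuild an existing table with a new compression setting
ALTER TABLE dbo.SalesArchive
REBUILD WITH (DATA_COMPRESSION = ROW)
GO
```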
You can download useful tools from Microsoft System Center to help you migrate from Oracle, Sybase, and Access, and also for IT assessments to help you plan your production environments.
Just visit the URL below.