Tuesday, October 26, 2010

SQL Server 2005 Diagnostic Information Queries

I found this post by Glenn very informative and useful for DBAs.

http://bit.ly/cz6fCO

-----------

-- SQL Version information for current instance
SELECT @@VERSION AS [SQL Version Info];
--   SQL 2005 SP2 is now an "unsupported service pack"
--   SQL 2005 SP2 Builds               SQL 2005 SP3 Builds
--   Build     Description             Build     Description
--   3042      SP2 RTM                 4035      SP3 RTM
--   3161      SP2 CU1                 4207      SP3 CU1
--   3175      SP2 CU2                 4211      SP3 CU2
--   3186      SP2 CU3                 4220      SP3 CU3
--   3200      SP2 CU4                 4226      SP3 CU4
--   3215      SP2 CU5                 4230      SP3 CU5
--   3228      SP2 CU6                 4266      SP3 CU6
--   3239      SP2 CU7                 4273      SP3 CU7
--   3257      SP2 CU8                 4285      SP3 CU8
--   3282      SP2 CU9                 4294      SP3 CU9
--   3294      SP2 CU10                4305      SP3 CU10
--   3301      SP2 CU11                4309      SP3 CU11
--   3315      SP2 CU12                4311      SP3 CU12
--   3325      SP2 CU13
--   3328      SP2 CU14
--   3330      SP2 CU15
--   3355      SP2 CU16
--   3356      SP2 CU17
-- The SP2 branch is "retired"
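
-- If you prefer discrete fields to parsing the @@VERSION string, SERVERPROPERTY
-- returns the build, service pack level, and edition directly (SQL Server 2005 and later)
SELECT SERVERPROPERTY('ProductVersion') AS [Product Version],
       SERVERPROPERTY('ProductLevel') AS [Product Level],
       SERVERPROPERTY('Edition') AS [Edition];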

-- Hardware Information for SQL Server 2005
-- (Cannot distinguish between HT and multi-core)
SELECT cpu_count AS [Logical CPU Count], hyperthread_ratio AS [Hyperthread Ratio],
cpu_count/hyperthread_ratio AS [Physical CPU Count],
physical_memory_in_bytes/1048576 AS [Physical Memory (MB)]
FROM sys.dm_os_sys_info;

-- Get configuration values for instance
SELECT name, value, value_in_use, [description]
FROM sys.configurations
ORDER BY name ;
-- Focus on
-- clr enabled (only enable if you need it)
-- lightweight pooling (should be zero)
-- max degree of parallelism
-- max server memory (MB)
-- priority boost (should be zero)
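
-- Example only: reviewing or (if appropriate) changing one of these settings with sp_configure.
-- The MAXDOP value of 4 below is purely illustrative; choose a value that fits your hardware
-- EXEC sp_configure 'show advanced options', 1;
-- RECONFIGURE;
-- EXEC sp_configure 'max degree of parallelism';      -- show the current and configured values
-- EXEC sp_configure 'max degree of parallelism', 4;
-- RECONFIGURE;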

-- File Names and Paths for TempDB and all user databases in instance
SELECT DB_NAME([database_id]) AS [Database Name], [file_id],
       name, physical_name, type_desc
FROM sys.master_files
WHERE ([database_id] > 4 AND [database_id] <> 32767)
OR [database_id] = 2;
-- Things to look at:
-- Are data files and log files on different drives?
-- Is everything on C: drive?
-- Is TempDB on dedicated drives?
-- Are there multiple data files?
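
-- Example only: adding a second TempDB data file; the path, size, and growth values
-- below are placeholders, and new files should be sized the same as the existing ones
-- ALTER DATABASE tempdb
-- ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);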

-- Calculates average stalls per read, per write, and per total input/output for each database file.
SELECT DB_NAME(fs.database_id) AS [Database Name], mf.physical_name, io_stall_read_ms, num_of_reads,
CAST(io_stall_read_ms/(1.0 + num_of_reads) AS NUMERIC(10,1)) AS [avg_read_stall_ms],io_stall_write_ms,
num_of_writes,CAST(io_stall_write_ms/(1.0+num_of_writes) AS NUMERIC(10,1)) AS [avg_write_stall_ms],
io_stall_read_ms + io_stall_write_ms AS [io_stalls], num_of_reads + num_of_writes AS [total_io],
CAST((io_stall_read_ms + io_stall_write_ms)/(1.0 + num_of_reads + num_of_writes) AS NUMERIC(10,1))
AS [avg_io_stall_ms]
FROM sys.dm_io_virtual_file_stats(null,null) AS fs
INNER JOIN sys.master_files AS mf
ON fs.database_id = mf.database_id
AND fs.[file_id] = mf.[file_id]
ORDER BY avg_io_stall_ms DESC;
-- Helps determine which database files on the entire instance have the most I/O bottlenecks

-- Recovery model, log reuse wait description, and compatibility level for all databases on instance
SELECT [name], recovery_model_desc, log_reuse_wait_desc, [compatibility_level]
FROM sys.databases;
-- Things to look at
-- How many databases are on the instance?
-- What recovery models are they using?
-- What is the log reuse wait description?
-- What compatibility level are they on?
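
-- Example only: changing a recovery model or compatibility level after reviewing the output.
-- YourDatabaseName is a placeholder; on SQL 2005 the compatibility level is set with sp_dbcmptlevel
-- ALTER DATABASE YourDatabaseName SET RECOVERY FULL;
-- EXEC sp_dbcmptlevel 'YourDatabaseName', 90;   -- 90 = SQL Server 2005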

-- Clear Wait Stats
-- DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);
-- Isolate top waits for server instance since last restart or statistics clear
WITH Waits AS
(SELECT wait_type, wait_time_ms / 1000. AS wait_time_s,
100. * wait_time_ms / SUM(wait_time_ms) OVER() AS pct,
ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS rn
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('CLR_SEMAPHORE','LAZYWRITER_SLEEP','RESOURCE_QUEUE','SLEEP_TASK'
,'SLEEP_SYSTEMTASK','SQLTRACE_BUFFER_FLUSH','WAITFOR', 'LOGMGR_QUEUE','CHECKPOINT_QUEUE'
,'REQUEST_FOR_DEADLOCK_SEARCH','XE_TIMER_EVENT','BROKER_TO_FLUSH','BROKER_TASK_STOP','CLR_MANUAL_EVENT'
,'CLR_AUTO_EVENT','DISPATCHER_QUEUE_SEMAPHORE', 'FT_IFTS_SCHEDULER_IDLE_WAIT'
,'XE_DISPATCHER_WAIT', 'XE_DISPATCHER_JOIN', 'SQLTRACE_INCREMENTAL_FLUSH_SLEEP'))
SELECT W1.wait_type,
CAST(W1.wait_time_s AS DECIMAL(12, 2)) AS wait_time_s,
CAST(W1.pct AS DECIMAL(12, 2)) AS pct,
CAST(SUM(W2.pct) AS DECIMAL(12, 2)) AS running_pct
FROM Waits AS W1
INNER JOIN Waits AS W2
ON W2.rn <= W1.rn
GROUP BY W1.rn, W1.wait_type, W1.wait_time_s, W1.pct
HAVING SUM(W2.pct) - W1.pct < 99; -- percentage threshold
-- Common Significant Wait types with BOL explanations
-- *** Network Related Waits ***
-- ASYNC_NETWORK_IO        Occurs on network writes when the task is blocked behind the network
-- *** Locking Waits ***
-- LCK_M_IX                Occurs when a task is waiting to acquire an Intent Exclusive (IX) lock
-- LCK_M_IU                Occurs when a task is waiting to acquire an Intent Update (IU) lock
-- LCK_M_S                Occurs when a task is waiting to acquire a Shared lock
-- *** I/O Related Waits ***
-- ASYNC_IO_COMPLETION  Occurs when a task is waiting for I/Os to finish
-- IO_COMPLETION        Occurs while waiting for I/O operations to complete.
--                      This wait type generally represents non-data page I/Os. Data page I/O completion waits appear
--                      as PAGEIOLATCH_* waits
-- PAGEIOLATCH_SH        Occurs when a task is waiting on a latch for a buffer that is in an I/O request.
--                      The latch request is in Shared mode. Long waits may indicate problems with the disk subsystem.
-- PAGEIOLATCH_EX        Occurs when a task is waiting on a latch for a buffer that is in an I/O request.
--                      The latch request is in Exclusive mode. Long waits may indicate problems with the disk subsystem.
-- WRITELOG             Occurs while waiting for a log flush to complete.
--                      Common operations that cause log flushes are checkpoints and transaction commits.
-- PAGELATCH_EX            Occurs when a task is waiting on a latch for a buffer that is not in an I/O request.
--                      The latch request is in Exclusive mode.
-- BACKUPIO                Occurs when a backup task is waiting for data, or is waiting for a buffer in which to store data
-- *** CPU Related Waits ***
-- SOS_SCHEDULER_YIELD  Occurs when a task voluntarily yields the scheduler for other tasks to execute.
--                      During this wait the task is waiting for its quantum to be renewed.
-- THREADPOOL            Occurs when a task is waiting for a worker to run on.
--                      This can indicate that the maximum worker setting is too low, or that batch executions are taking
--                      unusually long, thus reducing the number of workers available to satisfy other batches.
-- CX_PACKET            Occurs when trying to synchronize the query processor exchange iterator
--                        You may consider lowering the degree of parallelism if contention on this wait type becomes a problem

-- Signal Waits for instance
SELECT CAST(100.0 * SUM(signal_wait_time_ms) / SUM (wait_time_ms) AS NUMERIC(20,2)) AS [%signal (cpu) waits],
       CAST(100.0 * SUM(wait_time_ms - signal_wait_time_ms) / SUM (wait_time_ms) AS NUMERIC(20,2)) AS [%resource waits]
FROM sys.dm_os_wait_stats;

-- A signal wait percentage above 10-15% is usually a sign of CPU pressure

-- Get Average Task Counts (run multiple times)
SELECT AVG(current_tasks_count) AS [Avg Task Count],
AVG(runnable_tasks_count) AS [Avg Runnable Task Count],
AVG(pending_disk_io_count) AS [AvgPendingDiskIOCount]
FROM sys.dm_os_schedulers WITH (NOLOCK)
WHERE scheduler_id < 255;
-- Sustained values above 10 suggest further investigation in that area

-- Get CPU Utilization History (SQL 2005 Only)
DECLARE @ts_now bigint;
SET @ts_now = (SELECT cpu_ticks / CONVERT(float, cpu_ticks_in_ms) FROM sys.dm_os_sys_info);
SELECT TOP(144) SQLProcessUtilization AS [SQL Server Process CPU Utilization],
               SystemIdle AS [System Idle Process],
               100 - SystemIdle - SQLProcessUtilization AS [Other Process CPU Utilization],
               DATEADD(ms, -1 * (@ts_now - [timestamp]), GETDATE()) AS [Event Time]
FROM (
      SELECT record.value('(./Record/@id)[1]', 'int') AS record_id,
            record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int')
            AS [SystemIdle],
            record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]',
            'int')
            AS [SQLProcessUtilization], [timestamp]
      FROM (
            SELECT [timestamp], CONVERT(xml, record) AS [record]
            FROM sys.dm_os_ring_buffers
            WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
            AND record LIKE N'%<SystemHealth>%') AS x
      ) AS y
ORDER BY record_id DESC;

-- Page Life Expectancy (PLE) value for default instance
SELECT cntr_value AS [Page Life Expectancy]
FROM sys.dm_os_performance_counters
WHERE [object_name] = N'SQLServer:Buffer Manager' -- Modify this if you have named instances
AND counter_name = 'Page life expectancy';

-- PLE is a good measurement of memory pressure
-- Higher PLE is better. Below 300 is generally bad.
-- Watch the trend, not the absolute value

-- Buffer Pool Usage for instance
SELECT TOP(20) [type], SUM(single_pages_kb) AS [SPA Mem, Kb]
FROM sys.dm_os_memory_clerks
GROUP BY [type] 
ORDER BY SUM(single_pages_kb) DESC;

-- CACHESTORE_SQLCP  SQL Plans         - These are cached SQL statements or batches that aren't in
--                                     stored procedures, functions and triggers
-- CACHESTORE_OBJCP  Object Plans      - These are compiled plans for stored procedures,
--                                     functions and triggers
-- CACHESTORE_PHDR   Algebrizer Trees  - An algebrizer tree is the parsed SQL text that
--                                     resolves the table and column names

-- Find single-use, ad-hoc queries that are bloating the plan cache
SELECT TOP(100) [text], cp.size_in_bytes
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(plan_handle)
WHERE cp.cacheobjtype = N'Compiled Plan'
AND cp.objtype = N'Adhoc'
AND cp.usecounts = 1
ORDER BY cp.size_in_bytes DESC;
-- Gives you the text and size of single-use ad-hoc queries that waste space in plan cache
-- Enabling 'optimize for ad hoc workloads' for the instance can help (SQL Server 2008 and 2008 R2 only)
-- Enabling forced parameterization for the database can help
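
-- Example only: the two mitigations mentioned above. 'optimize for ad hoc workloads'
-- exists on SQL Server 2008 and later, not on SQL 2005; YourDatabaseName is a placeholder
-- EXEC sp_configure 'show advanced options', 1;
-- RECONFIGURE;
-- EXEC sp_configure 'optimize for ad hoc workloads', 1;
-- RECONFIGURE;
-- ALTER DATABASE YourDatabaseName SET PARAMETERIZATION FORCED;   -- test carefully first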

-- Switch to user database *******************
--USE YourDatabaseName;
--GO
-- Individual File Sizes and space available for current database
SELECT name AS [File Name] , physical_name AS [Physical Name], size/128.0 AS [Total Size in MB],
size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS [Available Space In MB]
FROM sys.database_files;
-- Look at how large and how full the files are and where they are located
-- Make sure the transaction log is not full!!
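
-- A quick way to check log fullness: this reports log size and percent used for every database
DBCC SQLPERF(LOGSPACE);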

-- Cached SPs By Execution Count (SQL 2005)
SELECT TOP(25) qt.[text] AS [SP Name], qs.execution_count AS [Execution Count], 
qs.execution_count/DATEDIFF(Second, qs.creation_time, GETDATE()) AS [Calls/Second],
qs.total_worker_time/qs.execution_count AS [AvgWorkerTime],
qs.total_worker_time AS [TotalWorkerTime],
qs.total_elapsed_time/qs.execution_count AS [AvgElapsedTime],
qs.max_logical_reads, qs.max_logical_writes, qs.total_physical_reads,
DATEDIFF(Minute, qs.creation_time, GetDate()) AS [Age in Cache]
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.[sql_handle]) AS qt
WHERE qt.[dbid] = DB_ID() -- Filter by current database
ORDER BY qs.execution_count DESC;

-- Cached SPs By Worker Time (SQL 2005). Worker time relates to CPU cost
SELECT TOP(25) qt.[text] AS [SP Name], qs.total_worker_time AS [TotalWorkerTime],
qs.total_worker_time/qs.execution_count AS [AvgWorkerTime],
qs.execution_count AS [Execution Count],
ISNULL(qs.execution_count/DATEDIFF(Second, qs.creation_time, GETDATE()), 0) AS [Calls/Second],
ISNULL(qs.total_elapsed_time/qs.execution_count, 0) AS [AvgElapsedTime],
qs.max_logical_reads, qs.max_logical_writes,
DATEDIFF(Minute, qs.creation_time, GETDATE()) AS [Age in Cache]
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.[sql_handle]) AS qt
WHERE qt.[dbid] = DB_ID() -- Filter by current database
ORDER BY qs.total_worker_time DESC;

-- Cached SPs By Logical Reads (SQL 2005). Logical reads relate to memory pressure
SELECT TOP(25) qt.[text] AS [SP Name], total_logical_reads, qs.max_logical_reads,
total_logical_reads/qs.execution_count AS [AvgLogicalReads], qs.execution_count AS [Execution Count],
qs.execution_count/DATEDIFF(Second, qs.creation_time, GETDATE()) AS [Calls/Second],
qs.total_worker_time/qs.execution_count AS [AvgWorkerTime],
qs.total_worker_time AS [TotalWorkerTime],
qs.total_elapsed_time/qs.execution_count AS [AvgElapsedTime],
qs.total_logical_writes,
qs.max_logical_writes, qs.total_physical_reads,
DATEDIFF(Minute, qs.creation_time, GETDATE()) AS [Age in Cache]
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.[sql_handle]) AS qt
WHERE qt.[dbid] = DB_ID() -- Filter by current database
ORDER BY total_logical_reads DESC;

-- Top Cached SPs By Total Logical Writes (SQL 2005)
-- Logical writes relate to both memory and disk I/O pressure
SELECT TOP(25) qt.[text] AS [SP Name], qs.total_logical_writes, qs.max_logical_writes,
qs.total_logical_writes/qs.execution_count AS [AvgLogicalWrites], qs.execution_count AS [Execution Count],
qs.execution_count/DATEDIFF(Second, qs.creation_time, GETDATE()) AS [Calls/Second],
qs.total_worker_time/qs.execution_count AS [AvgWorkerTime],
qs.total_worker_time AS [TotalWorkerTime],
qs.total_elapsed_time/qs.execution_count AS [AvgElapsedTime],
qs.total_physical_reads,
DATEDIFF(Minute, qs.creation_time, GETDATE()) AS [Age in Cache]
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.[sql_handle]) AS qt
WHERE qt.[dbid] = DB_ID() -- Filter by current database
ORDER BY total_logical_writes DESC;

-- Lists the top statements by average input/output usage for the current database
SELECT TOP(50) OBJECT_NAME(qt.objectid) AS [SP Name],
(qs.total_logical_reads + qs.total_logical_writes) /qs.execution_count AS [Avg IO],
SUBSTRING(qt.[text],qs.statement_start_offset/2,
    (CASE
        WHEN qs.statement_end_offset = -1
     THEN LEN(CONVERT(nvarchar(max), qt.[text])) * 2
        ELSE qs.statement_end_offset
     END - qs.statement_start_offset)/2) AS [Query Text]   
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE qt.[dbid] = DB_ID()
ORDER BY [Avg IO] DESC;
-- Helps you find the most expensive statements for I/O by SP

-- Possible Bad Indexes (writes > reads)
SELECT OBJECT_NAME(s.[object_id]) AS [Table Name], i.name AS [Index Name], i.index_id,
        user_updates AS [Total Writes], user_seeks + user_scans + user_lookups AS [Total Reads],
        user_updates - (user_seeks + user_scans + user_lookups) AS [Difference]
FROM sys.dm_db_index_usage_stats AS s WITH (NOLOCK)
INNER JOIN sys.indexes AS i WITH (NOLOCK)
ON s.[object_id] = i.[object_id]
AND i.index_id = s.index_id
WHERE OBJECTPROPERTY(s.[object_id],'IsUserTable') = 1
AND s.database_id = DB_ID()
AND user_updates > (user_seeks + user_scans + user_lookups)
AND i.index_id > 1
ORDER BY [Difference] DESC, [Total Writes] DESC, [Total Reads] ASC;
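
-- Example only: if an index is confirmed to be low value, it can be dropped.
-- IX_YourIndexName and dbo.YourTableName are placeholders; verify usage over a full business cycle first
-- DROP INDEX IX_YourIndexName ON dbo.YourTableName;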

-- Missing Indexes for entire instance by Index Advantage
SELECT user_seeks * avg_total_user_cost * (avg_user_impact * 0.01) AS [index_advantage], migs.last_user_seek,
mid.[statement] AS [Database.Schema.Table],
mid.equality_columns, mid.inequality_columns, mid.included_columns,
migs.unique_compiles, migs.user_seeks, migs.avg_total_user_cost, migs.avg_user_impact
FROM sys.dm_db_missing_index_group_stats AS migs WITH (NOLOCK)
INNER JOIN sys.dm_db_missing_index_groups AS mig WITH (NOLOCK)
ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details AS mid WITH (NOLOCK)
ON mig.index_handle = mid.index_handle
ORDER BY index_advantage DESC;

-- Look at last user seek time, number of user seeks to help determine source and importance
-- SQL Server is overly eager to add included columns, so beware
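
-- Hypothetical example only: turning one promising row from the query above into an index.
-- dbo.Orders, CustomerID, OrderDate, and TotalDue are made-up names; substitute the equality,
-- inequality, and included columns the DMVs actually reported, and test before deploying
-- CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderDate
-- ON dbo.Orders (CustomerID, OrderDate)
-- INCLUDE (TotalDue);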

-- Breaks down buffers used by current database by object (table, index) in the buffer cache
SELECT OBJECT_NAME(p.[object_id]) AS [ObjectName],
p.index_id, COUNT(*)/128 AS [buffer size(MB)],  COUNT(*) AS [buffer_count]
FROM sys.allocation_units AS a
INNER JOIN sys.dm_os_buffer_descriptors AS b
ON a.allocation_unit_id = b.allocation_unit_id
INNER JOIN sys.partitions AS p
ON a.container_id = p.hobt_id
WHERE b.database_id = DB_ID()
AND p.[object_id] > 100
GROUP BY p.[object_id], p.index_id
ORDER BY buffer_count DESC;

-- Tells you what tables and indexes are using the most memory in the buffer cache

-- Detect blocking (run multiple times)
SELECT t1.resource_type AS [lock type],DB_NAME(resource_database_id) AS [database],
t1.resource_associated_entity_id AS [blk object],t1.request_mode AS [lock req],  --- lock requested
t1.request_session_id AS [waiter sid], t2.wait_duration_ms AS [wait time],       -- spid of waiter 
(SELECT [text] FROM sys.dm_exec_requests AS r                                    -- get sql for waiter
CROSS APPLY sys.dm_exec_sql_text(r.[sql_handle])
WHERE r.session_id = t1.request_session_id) AS [waiter_batch],
(SELECT SUBSTRING(qt.[text],r.statement_start_offset/2,
    (CASE WHEN r.statement_end_offset = -1
    THEN LEN(CONVERT(nvarchar(max), qt.[text])) * 2
    ELSE r.statement_end_offset END - r.statement_start_offset)/2)
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.[sql_handle]) AS qt
WHERE r.session_id = t1.request_session_id) AS [waiter_stmt],    -- statement blocked
t2.blocking_session_id AS [blocker sid],                         -- spid of blocker
(SELECT [text] FROM sys.sysprocesses AS p                        -- get sql for blocker
CROSS APPLY sys.dm_exec_sql_text(p.[sql_handle])
WHERE p.spid = t2.blocking_session_id) AS [blocker_stmt]
FROM sys.dm_tran_locks AS t1
INNER JOIN sys.dm_os_waiting_tasks AS t2
ON t1.lock_owner_address = t2.resource_address;

-- When were Statistics last updated on all indexes?
SELECT o.name, i.name AS [Index Name], 
       STATS_DATE(i.[object_id], i.index_id) AS [Statistics Date],
       s.auto_created, s.no_recompute, s.user_created
FROM sys.objects AS o WITH (NOLOCK)
INNER JOIN sys.indexes AS i WITH (NOLOCK)
ON o.[object_id] = i.[object_id]
INNER JOIN sys.stats AS s WITH (NOLOCK)
ON i.[object_id] = s.[object_id]
AND i.index_id = s.stats_id
WHERE o.[type] = 'U'
ORDER BY STATS_DATE(i.[object_id], i.index_id) ASC;   
-- Helps discover possible problems with out of date statistics
-- Also gives you an idea which indexes are most active
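
-- Example only: refreshing statistics if they look stale; dbo.YourTableName is a placeholder
-- EXEC sp_updatestats;                                -- update stats that need it, database-wide
-- UPDATE STATISTICS dbo.YourTableName WITH FULLSCAN;  -- or target a specific table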

Monday, August 16, 2010

SharePoint 2010 Visio Services

Visio Services in SharePoint 2010 makes data-driven visualizations easier to create than ever before. Dave McMahon has generously provided an excellent introduction to get us up to speed, complete with a step-by-step guide to getting started.

Complete article: http://bit.ly/dCVmMx

Friday, July 30, 2010

Migrating Users to SharePoint 2010 using Visual Upgrade

SharePoint 2010 introduces a new UI that is a clear and welcome departure from the outdated UI supplied by SharePoint Server 2007. The new SharePoint 2010 UI leaves behind the unnecessary postbacks, the confusing page transitions, and the table-based page layouts of SharePoint 2007. The result is a much improved user experience with a server-side Ribbon and a wealth of new client-side behavior that's able to execute many users' commands behind the scenes using asynchronous callbacks to the Web server.

More ...

Sunday, March 21, 2010

How to Implement SCRUM in 10 easy steps

Scrum is an iterative, incremental framework for agile software development. Although Scrum was intended for management of software development projects, it can be used to run software maintenance teams, or as a general project/program management approach.

There are several implementations of systems for managing the Scrum process, which range from yellow stickers and whiteboards to software packages. One of Scrum’s biggest advantages is that it is very easy to learn and requires little effort to start using.

Here's one nicely plotted at http://bit.ly/bGQNd1

Monday, March 1, 2010

Hands-on Entity Framework

Object-relational mapping, or O/RM, is a technique that creates a virtual database you can use from inside your application, where, instead of tables of “things”, there are collections of “objects”. An object can be a Person, a Car, a Record Album, an Invoice, whatever your application is working with. For example, a Record Album has a collection, or list, of tracks and, for the most part, has only one Artist or Band. On the flip side, an Artist or Band has a list of Record Albums they have recorded. At its heart, what an O/RM does is map database tables, and their relations, into a set of objects, or classes. This allows you to think in the context of your application, instead of how to make the data fit correctly into the tables.

Originally posted in simple-talk, click here to read more...

Monday, February 8, 2010

Laying out SQL Code

It is important to ensure that SQL code is laid out the best way for the team that has to use and maintain it. Before you can work out how to enforce a standard, you have to work out what that standard should be for the application. So do you dive into detail, or create an overall logic to the way it is done?

Click here for the full article

Monday, February 1, 2010

Successfully Closeout a Project

Project Closeout is sometimes perceived as the least important of all the project phases, but its value to future projects should not be underestimated. The purpose of Project Closeout is to assess the project and derive any lessons learned and best practices to be applied to future projects.

To explore more, click here: http://bit.ly/9W3oqs

Friday, January 29, 2010

Office 2010 and SharePoint 2010 VMs

2010 Information Worker Demonstration Virtual Machine (Beta) is a set of virtual machines focused on Office 2010 and SharePoint 2010. Although the VMs bring a range of technologies to the table, the fact is that the downloads are intended to simplify the evaluation and demonstration of the next iteration of Office and SharePoint.


“Virtual machine “a” contains the following pre-configured software: 

  1. Windows Server 2008 SP2 Standard Edition x64, running as an Active Directory Domain Controller for the “CONTOSO.COM” domain with DNS and WINS 
  2. Microsoft SQL Server 2008 SP2 Enterprise Edition with Analysis, Notification, and Reporting Services 
  3. Microsoft Office Communication Server 2007 R2 
  4. Visual Studio 2010 Beta 2 Ultimate Edition 
  5. Microsoft SharePoint Server 2010 Enterprise Edition Beta 2 
  6. Microsoft Office Web Applications Beta 2 
  7. FAST Search for SharePoint 2010 Beta 2 
  8. Microsoft Project Server 2010 Beta 2 
  9. Microsoft Office 2010 Beta 2 
  10. Microsoft Office Communicator 2007 R2,” Microsoft revealed. 

“Virtual machine “b” contains the following pre-configured software: 

  1. Windows Server 2008 R2 Standard Evaluation Edition x64, joined to the “CONTOSO.COM” domain 
  2. Microsoft Exchange Server 2010

Active Directory has been preconfigured with over 200 “demo” users with metadata in an organizational structure. All of these user profiles have been imported and indexed for search within SharePoint Server 2010, with “contoso\administrator” granted administrator permissions. SharePoint Server 2010 has been configured in a “Complete” farm using Kerberos authentication and the default SQL Server 2008 instance for data, and has a site collection created using the Team Site template and a FAST Search Center,” the company added.

Downloads link: http://bit.ly/bM3TjF

Saturday, January 23, 2010

Version 2 of PEAF (Pragmatic Enterprise Architecture Framework)


A vendor- and consultancy-independent, technology-neutral Enterprise Architecture Framework that allows organisations to kick-start or re-start an EA initiative, providing a comprehensive set of Products and Processes covering everything required to hit the ground running.
PEAF bridges the gap between minimal “EA Frameworks”, which do not provide a complete set of what is required, and huge monolithic “EA Frameworks”, which are too difficult to understand and use.
At the highest level, when compared to existing “EA Frameworks”, PEAF is smaller and less complex, whilst being easier to use and covering more of the scope of EA.

Saturday, January 9, 2010

9 essential responsibilities of a Product Owner

The role of Product Owner is demanding, although not always well defined. This article at Knowledgegenes explores 9 essential responsibilities that a Product Owner must embrace.


http://www.knowledgegenes.com/home.aspx?kgid=10102

Thursday, January 7, 2010

Tunnelpoint: The BDC for SharePoint in the Cloud

A hosted service that connects external services (for example, SOAP web services) to SharePoint list data in a SharePoint site.

Tuesday, January 5, 2010

Self Service BI in Excel 2010 and Sharepoint

Excel is no doubt a great tool in businesses large and small. Excel’s ease of use lets users build their own tools, with or without the help of an IT expert. In my organization, Excel spreadsheets play an essential role in the day-to-day functions of the business, such as trading in the stock market, managing portfolios, and calculating complex financial formulae. However, these spreadsheets are fragile, do not adapt to changing business requirements, and need constant maintenance. The scary thing is that, in most cases, only one person (the spreadsheet’s creator) really understands how the application works. Sometimes this makes the creator a bottleneck to the business workflow. In the worst case, a business function could halt if the creator were unavailable or if the application were corrupted or accidentally deleted, and the impact could be very serious, especially if there were a major problem restoring the application.


PowerPivot for Excel 2010, the managed self-service business intelligence (BI) feature of SQL Server 2008 R2 (formerly known by its code name “Gemini”) [more about PowerPivot at http://www.powerpivot.com], can handle such situations. It empowers both IT and business users: business users get to solve business problems in the Excel they already know, while the IT team gains the ability to centrally store, secure, and distribute those Excel spreadsheets throughout the organization using SharePoint.


The PowerPivot add-in for Excel 2010 uses a new storage engine built on the SQL Server Analysis Services (SSAS) OLAP engine. The new engine uses extreme data compression and enables Excel to work with hundreds of millions of rows with sub-second response times.


The following blog post describes how to use PowerPivot in Excel 2010.
http://blogs.msdn.com/excel/archive/2009/10/23/using-powerpivot-with-excel-2010.aspx