SSAS Performance – Best Practices and Performance Optimization

Optimizations of SSAS Cubes

SSAS – Best Practices and Performance Optimization – Part 1 of 4

http://www.mssqltips.com/sqlservertip/2565/ssas--best-practices-and-performance-optimization--part-1-of-4/

Performance optimization techniques for source system design and network settings

1. To avoid excessive locking or lock escalation, you can specify the TABLOCK query hint while querying, set ALLOW_ROW_LOCKS = OFF and ALLOW_PAGE_LOCKS = OFF when creating tables or indexes, or pull data from a read-only database.
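
For example, a minimal sketch (the table, index and column names are hypothetical):

-- Disable row and page locks when creating an index on a source table
CREATE NONCLUSTERED INDEX IX_FactSales_DateKey
ON dbo.FactSales (DateKey)
WITH (ALLOW_ROW_LOCKS = OFF, ALLOW_PAGE_LOCKS = OFF);

-- Take a single table-level lock instead of many row locks while extracting
SELECT DateKey, ProductKey, SalesAmount
FROM dbo.FactSales WITH (TABLOCK);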

2. If you need to aggregate fact data at the source before pulling it, you can often improve performance by creating indexed (materialized) views and pulling the data from them, instead of recomputing the aggregations on every extraction; see the sketch below.
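
A sketch of such an indexed view, assuming a hypothetical dbo.FactSales table with non-nullable columns:

-- SCHEMABINDING and COUNT_BIG(*) are required for an indexed view with GROUP BY
CREATE VIEW dbo.vw_SalesByDateProduct
WITH SCHEMABINDING
AS
SELECT DateKey,
       ProductKey,
       SUM(SalesAmount) AS SalesAmount,
       COUNT_BIG(*) AS RowCnt
FROM dbo.FactSales
GROUP BY DateKey, ProductKey;
GO
-- The unique clustered index is what materializes the view on disk
CREATE UNIQUE CLUSTERED INDEX IX_vw_SalesByDateProduct
ON dbo.vw_SalesByDateProduct (DateKey, ProductKey);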

3. Make sure SQL Server has resources available to serve your data pull requests; you can use Resource Governor to control the amount of resources available to OLTP versus OLAP operations, as sketched below.
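
As a minimal sketch (the pool and group names are made up; the classifier function that routes SSAS logins to the group is omitted):

-- Cap CPU and memory for OLAP extraction queries (hypothetical names)
CREATE RESOURCE POOL OlapPool
WITH (MAX_CPU_PERCENT = 50, MAX_MEMORY_PERCENT = 50);

CREATE WORKLOAD GROUP OlapGroup
USING OlapPool;

-- A classifier function (not shown) would assign SSAS connections to OlapGroup
ALTER RESOURCE GOVERNOR RECONFIGURE;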

4. Create appropriate indexes on the source tables to improve the performance of the queries SSAS fires while processing the cube or retrieving data from the source. If you have access to the source data, you can use the missing index DMVs (covered later in this document) or the Database Engine Tuning Advisor to identify and create missing indexes on the source.

5. Consider creating partitions, especially on fact tables; this can improve performance several-fold. (If partitions are distributed across multiple filegroups on multiple drives, SQL Server can access them in parallel, which is faster.) A partitioning sketch is shown below.
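
As a sketch (the boundary dates, filegroup and table names are hypothetical), yearly partitions for a fact table might look like this:

-- Two boundary values create three partitions, one per filegroup
CREATE PARTITION FUNCTION pfSalesByYear (date)
AS RANGE RIGHT FOR VALUES ('2012-01-01', '2013-01-01');

CREATE PARTITION SCHEME psSalesByYear
AS PARTITION pfSalesByYear TO (FG2011, FG2012, FG2013);

-- Place the fact table on the partition scheme, partitioned by date
CREATE TABLE dbo.FactSales
(
    SalesDate   date  NOT NULL,
    ProductKey  int   NOT NULL,
    SalesAmount money NOT NULL
) ON psSalesByYear (SalesDate);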

6. As we all know, I/O (Input/Output) is the slowest part of the hardware stack. If I/O is a bottleneck on your source system, consider using data compression, which reduces I/O at the cost of some extra CPU cycles for compressing and decompressing the data. SQL Server 2008 and later versions support both row and page compression for both tables and indexes. Before enabling compression on a table, you can use the sp_estimate_data_compression_savings system stored procedure to estimate how much space you will save, as shown below.
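
For example (the table name is hypothetical):

-- Estimate how much space page compression would save for a table
EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo',
     @object_name = 'FactSales',
     @index_id = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';

-- If the savings look worthwhile, enable page compression
ALTER TABLE dbo.FactSales REBUILD WITH (DATA_COMPRESSION = PAGE);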

7. When we select data from a table, shared locks are placed at the row/key level. This row/key-level locking can escalate to the table (or partition) level depending on the number of rows selected. To minimize the effort SQL Server spends managing these locks, you can specify the NOLOCK or TABLOCK query hint in the query.

8. When connecting to the source data system, use the default Read Committed isolation level; stricter or snapshot-based isolation levels add extra overhead (locks or row-version copies) at the source system.

9. You can specify the maximum number of connections that SSAS may create in parallel to pull data from the source system during cube processing. This helps cube processing run in parallel by using multiple connections to refresh several dimensions and facts at once. The default value is 10; consider increasing it if your cube has many dimensions/facts and your source supports more parallel connections, as this can greatly improve cube processing times.

10. If your source system (SQL Server) and SSAS are on the same machine, consider using the Shared Memory net-library for better performance. (The performance benefit comes from the fact that it bypasses the physical network stack; it uses the Windows Shared Memory feature to communicate between SQL Server and the client/SSAS. This net-library is enabled by default and is used when you specify a period (.), (local), localhost or the machine name when connecting, or when you prefix machine\instance with lpc:. See the Net-Libraries section later in this document.)

11. During cube processing, data moves from your relational data warehouse to SSAS in TDS (Tabular Data Stream) packets. Since data movement between the relational data warehouse and SSAS is normally high, configure a larger packet size (and therefore fewer packets) rather than a smaller one, to minimize the overhead of breaking data into chunks/packets and reassembling it at the other end. (To change the packet size, go to the connection manager, click the All page on the left side, and set the Packet Size property to 32767, roughly 32 KB, instead of its default of 4096, 4 KB. Note that a larger network packet size is usually good for data warehousing scenarios but not for OLTP-type applications, so it is better to override the packet size for your connection separately rather than changing it on SQL Server for all connections. You can verify the packet size of existing connections via the net_packet_size column shown in script #1 later in this document.)

Best practices and performance optimization techniques for cube design and development

Dimension Design

1. Include only those columns in dimension which are required by the business.

Including unnecessary columns puts extra overhead on SSAS for managing and storing them, and makes processing and querying take longer.

2. Define attribute relationships or cascading attribute relationships.

By default all attributes are related to the key attribute, so define attribute relationships wherever applicable; for example, days roll up into months, months into quarters, quarters into years, etc. This makes queries faster, since SSAS can aggregate 4 quarterly (or 12 monthly) values to arrive at a yearly figure instead of aggregating 365 daily values. Make sure you don't create redundant attribute relationships, for example "days roll up into months", "months roll up into quarters" and also "days roll up into quarters", because this adds extra overhead.

3. Specify the appropriate attribute relationship type

By default an attribute relationship is considered Flexible; wherever applicable, make it Rigid for better performance. With a rigid relationship, SSAS does not re-evaluate dimension members on subsequent processing, which improves performance. Make sure you change a relationship to Rigid only where the relationship genuinely never changes, or you may get exceptions during processing.

4. Turn Off the Attribute Hierarchy and Use Member Properties

Set AttributeHierarchyEnabled to False for attributes (like Address or List Price) for which you don't need aggregations calculated and which you want to access as member properties. Setting this property to False improves processing performance and reduces the overall cube size, since those attributes are not considered for aggregation and index creation. This makes sense for attributes with high cardinality or a one-to-one relationship with the key attribute that are not used for slicing and dicing; for example Address, Phone Number, etc.

5. Appropriately set KeyColumns property

Ensure that the KeyColumns property is set to identify unique values; for example, a month value of 1 is insufficient if the dimension contains more than a single year, so in that case combine the Year and Month columns to form the key.

6. Setting AttributeHierarchyOptimizedState property to Not Optimized

During processing of the key attribute, bitmap indexes are created for every related attribute. Building these bitmap indexes can take time if the key attribute has one or more related attributes with high cardinality (for example Address, Phone Number or List Price). At query time, the bitmap indexes for such attributes are not useful in speeding up retrieval, since the storage engine still must sift through a large number of distinct values to reach the desired values. Unwanted bitmap indexes increase processing time and cube size, and may even have a negative impact on query response time. To avoid spending time building unnecessary bitmap indexes during processing, set the AttributeHierarchyOptimizedState property of such attributes to Not Optimized.

7. Creating user defined hierarchies

You should consider creating user-defined hierarchies whenever you have a chain of related attributes in a dimension, as that becomes a navigation path for end users. You should create at least one user-defined hierarchy in any dimension that does not contain a parent-child hierarchy. Make sure each lower-level attribute contains more members than the attribute above it; if that is not the case, your levels might be in the wrong order.

8. AttributeHierarchyVisible property of an attribute

Although it does not impact performance, it is recommended to set AttributeHierarchyVisible to False for all attributes included in user-defined hierarchies; this removes a duplicate, ambiguous navigation experience for end users.

9. Defining default member

By default "All member" is considered as a default member for an attribute and hence its recommended to define a default member for an attribute especially in the case where the attribute cannot be aggregated.

Measure Group Design and Optimization

1. Partitioning the measure groups

Apply a partitioning strategy to all measure groups (especially large ones) and partition them by one or more dimensions according to usage. This greatly improves cube processing as well as query performance.

Processing and query performance improve because multiple threads can work on multiple partitions of a measure group in parallel, both for processing and for serving query responses. You can even define a different aggregation strategy per partition; for example, a higher aggregation percentage for older partitions that are unlikely to change, and a lower percentage for recent partitions that change more often.

(See also: SQL Server 2012 Analysis Services Partitioning Performance Demonstration, http://www.mssqltips.com/sqlservertip/3012/sql-server-2012-analysis-services-partitioning-performance-demonstration/)

2. Aggregation

Define aggregations prudently for the measure groups; aggregations reduce the number of values SSAS has to scan from disk to answer a query. More (all required) aggregations improve query performance but slow down cube processing, whereas too few aggregations speed up processing but slow down queries. Ideally, start with an aggregation design targeting a 20%-30% query performance gain, then use the Usage Based Optimization wizard to define more aggregations, as discussed below. If you have partitioned your measure groups, consider a higher aggregation percentage for older partitions that are unlikely to change and a lower percentage for recent partitions that change frequently. Do not create aggregations larger than one-third of the size of the fact data.

You can record the fact table source record count in the EstimatedRows property of each measure group, and the attribute member counts in the EstimatedCount property of each attribute. Keeping this metadata up-to-date improves the effectiveness of your aggregation design.

3. Usage Based Optimization Wizard – Aggregation redefined

Generally we create aggregations targeting a 20%-30% performance gain in the beginning, and later use the Usage Based Optimization wizard to create more aggregations for the queries actually being run against the cube. The idea is to enable logging of queries run against your cube, then use the collected information as input to the Usage Based Optimization wizard to create aggregations for all, or just the long-running, queries.

4. AggregationUsage Property

AggregationUsage is an attribute property that SSAS uses to determine whether the attribute is an aggregation candidate. By default SSAS considers only key attributes and attributes in natural hierarchies for inclusion in aggregations. If some other attribute is frequently used for slicing and dicing, consider setting its AggregationUsage to Unrestricted to include it in the aggregation design. Avoid setting AggregationUsage to Full for an attribute with many members. Also, do not create an aggregation that contains several attributes from the same attribute relationship chain, because the values of the implied attributes can be derived from the first attribute in the chain.

5. IgnoreUnrelatedDimensions property usage

IgnoreUnrelatedDimensions is a measure group property whose default value is True, in which case the measure group repeats its total amount even for dimensions that are not related to it, which can lead to misinterpretation. Consider setting it to False so the measure group returns nothing for unrelated dimensions instead of showing the overall amount.

6. Distinct count measures

It's recommended to place each distinct count measure in its own measure group to improve performance.

7. Referenced relationship of dimension and measure group

You should consider materializing the referenced dimension relationship to improve query performance; the join between the reference dimension and the measure group is then resolved during processing rather than at query time.

Cube Processing

When we talk of processing a cube, there are two parts to it: processing data, which rebuilds dimensions (attribute stores and hierarchy stores) and the fact data store; and processing indexes, which creates bitmap indexes and the defined aggregations. You can execute a single command (ProcessFull) to perform both operations together, or execute separate commands (ProcessData and ProcessIndexes) for each operation; the latter lets you see how much time each operation takes.

You might do a full process each time, or a full process followed by subsequent incremental processes. Whichever approach you use, SSAS uses a job-based architecture (a controller job plus many other jobs, depending on the number of attributes, hierarchies, partitions, etc.) to process dimensions and facts.

Cube Synchronization

Cube processing requires an exclusive lock on the objects being committed, meaning those objects are unavailable to users during the commit. Conversely, long-running queries against SSAS can delay the acquisition of that exclusive lock, so processing may take longer to complete. To prevent processing and querying from interfering with each other, you can use a different strategy: one cube (the processing cube) gets processed (refreshed with the latest data from the source), and another cube (the querying cube) is synchronized with the first; users access only the second cube. There are several ways to synchronize the querying cube, one of which is the built-in cube synchronization feature.

Cube synchronization (SSAS database synchronization) brings the destination cube up to date with the source cube's latest metadata and data. While the destination cube is being synchronized, users can still query it, because during synchronization SSAS maintains two copies: one gets updated while the other remains available for use. After synchronization, SSAS automatically switches users to the newly refreshed copy and drops the outdated one.

Cache Warming

If you recall the SSAS architecture I discussed in Part 1 of this tip series, the Query Processor (Formula Engine) Cache stores calculation results, while the Storage Engine Cache stores the aggregated/fact data being queried. This caching improves the performance of queries that are executed again, or of other queries whose responses can be served from the caches. The question is: do we have to wait for a first user query to populate the caches, or can we pre-execute queries ourselves and make the caches ready? We can: pre-execute one or more frequently used queries, or run the CREATE CACHE statement (which generally runs faster, as it does not return cell values), to load the caches. This is called cache warming.

As a precautionary note, don't assume a cached query result stays cached forever; it may be evicted by other query results if there isn't enough space for additional caching.

To clear the formula engine cache and storage engine you can execute this XMLA command:

<ClearCache xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <!-- The DatabaseID below is an example; replace it with your SSAS database ID -->
    <DatabaseID>Adventure Works DW</DatabaseID>
  </Object>
</ClearCache>

To initialize the calculation script you can execute this MDX query, which returns and caches nothing:
select {} on 0 from [Adventure Works]

Best practices and performance optimization techniques for Server Resources and Reporting Services.

1. Threading or parallel processing in SSAS

SSAS is designed to perform its operations in parallel, creating multiple threads to execute multiple requests simultaneously. Since creating and destroying threads is expensive, SSAS maintains two worker thread pools to which idle threads are returned, so they can be reused to serve other requests. These pools are the Query Thread Pool and the Process Thread Pool.

If you remember the SSAS architecture I discussed in the first tip of this series, the XMLA listener listens for incoming requests and creates a query thread (or pulls one from the query pool if available), which checks for data/calculations in the formula engine cache. If required, a process thread is created (or pulled from the process pool if available) to retrieve data from the storage engine cache or disk. The process thread stores the data it retrieved from disk in the storage engine cache, while the query thread stores its calculations in the formula engine cache, to serve future queries.

Some of the SSAS server configuration properties you should review and tune for your workload include:

– ThreadPool\Query\MinThreads and ThreadPool\Query\MaxThreads

– ThreadPool\Process\MinThreads and ThreadPool\Process\MaxThreads

– OLAP\Process\BufferMemoryLimit

– OLAP\Process\AggregationMemoryLimitMin and OLAP\Process\AggregationMemoryLimitMax

– DataDir and LogDir

– Log\FlightRecorder\Enabled

Scale up or scale out whenever or wherever possible

SQL Performance: Using SQL Server DMVs to Identify Missing Indexes

Reference: http://www.mssqltips.com/sqlservertip/1634/using-sql-server-dmvs-to-identify-missing-indexes/

Using SQL Server DMVs to Identify Missing Indexes

Problem
In a previous tip, Deeper insight into unused indexes for SQL Server, we discussed how to identify indexes that have been created but are used rarely or not at all. Now that I know which indexes I can drop, what is the process to identify which indexes I need to create? I can run the Database Engine Tuning Advisor or examine the query plans, but is there an easier way to determine which indexes may be needed?

Solution
As with many of the new features introduced in SQL Server 2005 and SQL Server 2008, Microsoft added a few dynamic management views to assist with identifying possible index candidates based on query history.

The dynamic management views are:

– sys.dm_db_missing_index_details

– sys.dm_db_missing_index_groups

– sys.dm_db_missing_index_group_stats

– sys.dm_db_missing_index_columns

Like most statistics tracked by DMVs, this data is cleared when the instance of SQL Server is restarted. So if you are working in a test environment and restart your instance of SQL Server, these views will probably return no data.
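
A commonly used sketch that joins these views to list suggested indexes, roughly ordered by estimated benefit:

-- List missing index suggestions, highest estimated benefit first
SELECT  d.[statement] AS table_name,
        d.equality_columns,
        d.inequality_columns,
        d.included_columns,
        s.user_seeks,
        s.avg_user_impact
FROM sys.dm_db_missing_index_details d
JOIN sys.dm_db_missing_index_groups g
    ON d.index_handle = g.index_handle
JOIN sys.dm_db_missing_index_group_stats s
    ON g.index_group_handle = s.group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC;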

SQL: Understanding SQL Server Net Libraries

Reference: http://www.mssqltips.com/sqlservertip/2320/understanding-sql-server-net-libraries/

Understanding SQL Server Net Libraries

Problem

I know that SQL Server and its clients interact with each other using different Net-Libraries or network protocols. But what are these Net-Libraries that SQL Server uses and how do each of these differ from the others. Also, in which scenario is one preferred over the others?

Solution

SQL Server provides different Net-Libraries, implemented as dynamic-link libraries (DLLs), to let clients communicate with SQL Server over different network protocols. These Net-Libraries specify which networking protocol to use when exchanging data back and forth between SQL Server and the clients. SQL Server provides Net-Libraries for almost all commonly used networking protocols, namely TCP/IP, Named Pipes, Shared Memory, etc. These Net-Libraries transparently send requests from the client to SQL Server and return responses from SQL Server back to the clients.

Depending on which Net-Libraries you have enabled on SQL Server, it starts listeners for them; you can configure SQL Server to listen simultaneously on any combination of Net-Libraries. When the net-library listener on the server receives TDS (Tabular Data Stream, the format into which client requests are broken and formatted before being sent to the server, and server responses before being sent back to clients) packets from the network, it reassembles them into the original client requests and passes them up for processing. Please note that each type of SQL Server Net-Library listener only picks up client requests arriving over the same type of Net-Library (a matching pair of Net-Libraries must be enabled on both SQL Server and the client), and a client connection uses a single protocol only. For example, if the client sends a request to SQL Server using TCP/IP, only the TCP/IP Net-Library listener on the server will pick it up. Also, SQL Server uses the same network stack over which the request was received to send the response back to the client.

These are the different Net-Library options that are available:

Shared Memory
When you have a client on the same physical machine as your SQL Server, you can use this simplest net-library, which provides faster communication than the other Net-Libraries because it bypasses the physical network stack; it uses the Windows Shared Memory feature to communicate between SQL Server and the client. You might also use this Net-Library while troubleshooting, if the other Net-Libraries are not working properly. It is enabled by default and is used when you specify a period (.), (local), localhost or the machine name, or when you prefix machine\instance with lpc:, when connecting to a SQL Server instance.

TCP/IP
TCP/IP (Transmission Control Protocol/Internet Protocol) is the default protocol on the clients running on the Windows platform. It allows communication across interconnected networks of computers with diverse hardware and operating systems. TCP/IP is considered as the first choice after Shared Memory, because of its inherent network traffic routing standards and advanced security features.

Named Pipes
The Named Pipes protocol uses a portion of one process's memory to pass information to another process; the output of one process becomes the input of another. The other process can be on the local machine or on a remote (networked) machine. This protocol can perform well on fast local area networks.

VIA
VIA stands for Virtual Interface Adapter. This protocol works with dedicated VIA hardware. Refrain from using this protocol in new development, as it has been deprecated and will be removed in a future release of SQL Server.

Normally TCP/IP is the preferred protocol on a slow LAN, WAN or dial-up network, whereas Named Pipes can be a better choice on fast local area networks.

Script #1 below returns details about existing connections, such as when each connection was established and which protocol it is using.

-- Script #1 - Identifying the protocols being used by current connections
SELECT connection_id, connect_time, net_transport, net_packet_size, client_net_address
FROM sys.dm_exec_connections

SQL: Understanding SQL Server Statistics (Good One)

Reference: http://blog.idera.com/sql-server/understanding-sql-server-statistics/

Understanding SQL Server Statistics

If there’s an upcoming election and you are running for office and getting ready to go from town to town city to city with your flyers, you will want to know approximately how many flyers you’re going to bring.

If you’re the coach of a sports team, you will want to know your players’ stats before you decide who to play when, and against who. You will often play a matchup game, even if you have 20 players, you might be allowed to play just 5 at a time, and you will want to know which of your players will best match up to the other team’s roster. And you don’t want to interview them one by one at game time (table scan), you want to know, based on their statistics, who your best bets are.

Just like the election candidate or the sports coach, SQL Server uses statistics to "react intelligently" in its query optimization. Knowing the number of records, page density, histograms and available indexes helps the SQL Server optimizer "guess" more accurately how to best retrieve data. A common misconception is that if you have indexes, SQL Server will use those indexes to retrieve the records in your query. Not necessarily: if you create, say, an index on a City column where 90% of the values are 'Vancouver', SQL Server will most likely opt for a table scan instead of using the index if it knows these stats.

For the most part, there may be little we need to do to keep our statistics up-to-date (depending on your configuration), but understanding statistics a little better helps us understand SQL Server optimization a little more.

How are statistics created?

Statistics can be created in different ways:
– Statistics are automatically created for each index key you create.

– If the database setting auto create statistics is on, SQL Server automatically creates statistics on non-indexed columns that are used in queries.

– Manually, with the CREATE STATISTICS command (see the sketch below).
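
A minimal sketch of manual creation (the table and column names are hypothetical):

-- Create single-column statistics on a non-indexed column
CREATE STATISTICS stat_City ON dbo.Customers (City) WITH FULLSCAN;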

What do statistics look like?

If you’re curious, there’s a couple ways you can peek at what statistics look like.

Option 1 – go to the Statistics node under your table in SSMS, right-click a statistics object > Properties, then go to the Details page to see the stats and histogram collected for that table.

Option 2 – use DBCC SHOW_STATISTICS ... WITH HISTOGRAM, as shown below.
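
For example (the table and statistics names are hypothetical):

-- Show only the histogram for a given statistics object
DBCC SHOW_STATISTICS ('dbo.Customers', stat_City) WITH HISTOGRAM;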

The histograms are a great way to visualize the data distribution in your table.

How are statistics updated?

The default settings in SQL Server are to autocreate and autoupdate statistics.

Notice that there are two (2) options with the Auto Update statistics.
Auto Update Statistics basically means, if there is an incoming query but statistics are stale, SQL Server will update statistics first before it generates an execution plan.
Auto Update Statistics Asynchronously on the other hand means, if there is an incoming query but statistics are stale, SQL Server uses the stale statistics to generate the execution plan, then updates the statistics afterwards.

However, if you want to update statistics manually, you can use either sp_updatestats or UPDATE STATISTICS <table name> [<statistics name>], as sketched below.
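
For example (the object names are hypothetical):

-- Update all statistics in the current database that need updating
EXEC sp_updatestats;

-- Or update statistics for one table, optionally a single statistics object
UPDATE STATISTICS dbo.Customers stat_City WITH FULLSCAN;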

How do we know statistics are being used?

One good check you can do when you generate execution plans for your queries: compare the "Actual Number of Rows" and "Estimated Number of Rows" values.

If these numbers are (consistently) fairly close, then most likely your statistics are up-to-date and used by the optimizer for the query. If not, time for you to re-check your statistics create/update frequency.

What configuration settings should we set?

There may be cases where you want to disable statistics updates temporarily, for example while doing massive updates on a table that you don't want slowed down by the auto-update.

However, for the most part, you will want to keep these SQL Server settings enabled (see the sketch after this list):
– auto create statistics
– auto update statistics
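
Both can be verified or set with ALTER DATABASE; a sketch with a hypothetical database name:

-- Keep auto create/update statistics enabled (the usual default)
ALTER DATABASE MyDatabase SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE MyDatabase SET AUTO_UPDATE_STATISTICS ON;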

References:

Rob Carrol. http://blogs.technet.com/b/rob/archive/2008/05/16/sql-server-statistics.aspx

Elisabeth Redei has an excellent 3-part series on SQL Server Statistics:
http://sqlblog.com/blogs/elisabeth_redei/archive/2009/03/01/lies-damned-lies-and-statistics-part-i.aspx
http://sqlblog.com/blogs/elisabeth_redei/archive/2009/08/10/lies-damned-lies-and-statistics-part-ii.aspx
http://sqlblog.com/blogs/elisabeth_redei/archive/2009/12/17/lies-damned-lies-and-statistics-part-iii-sql-server-2008.aspx

Excellent Books that touch on statistics
– Apress. Grant Fritchey & Sajal Dam. SQL Server 2008 Query Performance Tuning Distilled.
– RedGate. Holger Schmeling. SQL Server Statistics.

SQL: Difference Between SQL Vs. T-SQL?

Reference: http://blog-mstechnology.blogspot.in/2012/04/difference-between-sql-vs-t-sql.html

Difference Between SQL Vs. T-SQL?

SQL and T-SQL are query languages; both help extract and manipulate data in a database system.

SQL – Structured Query Language:-

It is pronounced "sequel" and originally stood for Structured English QUEry Language.

  • The SQL language has several elements: clauses, expressions, predicates, queries and statements.
  • SQL supports inserting, updating, reading and deleting data from/to the database, defining new structures, etc.
  • SQL supports different data types like image, text, varchar, datetime, xml, etc. Standard SQL supports only single statements submitted one at a time.

T-SQL – Transact-SQL:-

T-SQL (Transact-SQL) is Microsoft's extension of SQL. It adds features such as variables, temp tables, multi-statement transactions, stored procedures, functions, cursors, etc. T-SQL supports control-of-flow with commands such as BEGIN...END, BREAK, CONTINUE, GOTO, RETURN, IF...ELSE, WHILE and WAITFOR. T-SQL allows a FROM clause to be added to DELETE and UPDATE statements, and allows inserting multiple rows in a single statement with BULK INSERT.
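
As a small illustrative sketch of T-SQL variables and control-of-flow (the logic itself is arbitrary):

-- Declare a variable, loop with WHILE, and branch with IF
DECLARE @i int = 1;
WHILE @i <= 3
BEGIN
    IF @i = 2
        PRINT 'halfway there';
    SET @i = @i + 1;
END;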

SQL: Exporting SQL Data in CSV format

Reference: http://ask.sqlservercentral.com/questions/7080/export-results-of-a-query-into-a-csvtxt-file.html

Method 1:

Straight to file

In Management Studio, open a query window and set the ‘results to file’ by either

· Query -> results to -> Results to file

· Ctrl+Shift+F

· click the ‘Results to file’ icon on the toolbar

set the output to be comma separated

· Tools -> Options -> Query Results -> SQL Server -> Results to Text -> Output format = 'Comma delimited'

then run your query

select * from public_view where datefield = cast(getdate() as date)

Method 2:

In Management Studio, open a query window and set the results to text by either

· Query -> Results to -> Results to Text

· Ctrl+T

· click the ‘Results to text’ icon on the toolbar

set the output to be comma separated

· Tools -> Options -> Query Results -> SQL Server -> Results to Text -> Output format = 'Comma delimited'

then run your query

select * from public_view where datefield = cast(getdate() as date)

then select all the results and paste them into a file

Method 3:

Use BCP

probably best to review this in BOL

-- queryout exports a query; -c = character data, -t, = comma delimiter, -T = trusted connection
EXEC xp_cmdshell 'bcp "<your query here>" queryout C:\temp\results.csv -c -t, -T';