Saturday, January 26, 2008

google's incredible indexing speed - wow!

what an honor :-)




No comment!

DUMPBIN - that's cool

Get to know your assemblies with DUMPBIN

The Microsoft COFF Binary File Dumper (DUMPBIN.EXE) displays information about 32-bit Common Object File Format (COFF) binary files. You can use DUMPBIN to examine COFF object files, standard libraries of COFF objects, executable files, and dynamic-link libraries (DLLs).

http://support.microsoft.com/kb/177429

A GUI has been developed for this nice tool: http://www.cheztabor.com/dumpbinGUI/index.htm
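A few typical invocations, run from a Visual Studio command prompt (the DLL name here is just a placeholder):

```shell
REM show file headers (machine type, subsystem, linker version)
dumpbin /HEADERS MyLibrary.dll

REM list the functions the DLL exports
dumpbin /EXPORTS MyLibrary.dll

REM list the DLLs this binary depends on
dumpbin /DEPENDENTS MyLibrary.dll
```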

.Net CLR Memory Performance Monitoring

If you need to monitor your memory usage, Perfmon is the place to start. There are also some shareware tools and the MS CLR Profiler available, but let's explain the basics of using Perfmon.

To start Perfmon: Start -> Run, type: perfmon, and the Performance snap-in window appears.



The window starts with some default counters; usually those are not the counters you will have to look at. Therefore you need to choose the right selection, which you do with right-click -> Add Counters. Then you can select the counters you need, as shown below:




Let's describe the different counters we have:

  • # Bytes in all Heaps: This counter is the sum of four other counters; Gen 0 Heap Size; Gen 1 Heap Size; Gen 2 Heap Size and the Large Object Heap Size. This counter indicates the current memory allocated in bytes on the GC Heaps.
  • # GC Handles: This counter displays the current number of GC Handles in use. GCHandles are handles to resources external to the CLR and the managed environment. Handles occupy small amounts of memory in the GCHeap but potentially expensive unmanaged resources.
  • # Gen 0 Collections: This counter displays the number of times the generation 0 objects (youngest; most recently allocated) are garbage collected (Gen 0 GC) since the start of the application. Gen 0 GC occurs when the available memory in generation 0 is not sufficient to satisfy an allocation request. This counter is incremented at the end of a Gen 0 GC. Higher generation GCs include all lower generation GCs. This counter is explicitly incremented when a higher generation (Gen 1 or Gen 2) GC occurs. _Global_ counter value is not accurate and should be ignored. This counter displays the last observed value.
  • # Gen 1 Collections: This counter displays the number of times the generation 1 objects are garbage collected since the start of the application. The counter is incremented at the end of a Gen 1 GC. Higher generation GCs include all lower generation GCs. This counter is explicitly incremented when a higher generation (Gen 2) GC occurs. _Global_ counter value is not accurate and should be ignored. This counter displays the last observed value.
  • # Gen 2 Collections: This counter displays the number of times the generation 2 objects (older) are garbage collected since the start of the application. The counter is incremented at the end of a Gen 2 GC (also called full GC). _Global_ counter value is not accurate and should be ignored. This counter displays the last observed value.
  • # Induced GC: This counter displays the peak number of times a garbage collection was performed because of an explicit call to GC.Collect. It's good practice to let the GC tune the frequency of its collections.
  • # of Pinned Objects: This counter displays the number of pinned objects encountered in the last GC. This counter tracks the pinned objects only in the heaps that were garbage collected e.g. a Gen 0 GC would cause enumeration of pinned objects in the generation 0 heap only. A pinned object is one that the Garbage Collector cannot move in memory.
  • # of Sink Blocks in use: This counter displays the current number of sync blocks in use. Sync blocks are per-object data structures allocated for storing synchronization information. Sync blocks hold weak references to managed objects and need to be scanned by the Garbage Collector. Sync blocks are not limited to storing synchronization information and can also store COM interop metadata. This counter was designed to indicate performance problems with heavy use of synchronization primitives.
  • # Total committed Bytes: This counter displays the amount of virtual memory (in bytes) currently committed by the Garbage Collector. (Committed memory is the physical memory for which space has been reserved on the disk paging file).
  • # Total reserved Bytes: This counter displays the amount of virtual memory (in bytes) currently reserved by the Garbage Collector. (Reserved memory is the virtual memory space reserved for the application but no disk or main memory pages have been used.)
  • % Time in GC: % Time in GC is the percentage of elapsed time that was spent in performing a garbage collection (GC) since the last GC cycle. This counter is usually an indicator of the work done by the Garbage Collector on behalf of the application to collect and compact memory. This counter is updated only at the end of every GC and the counter value reflects the last observed value; it's not an average.
  • Allocated Bytes/sec: This counter displays the rate of bytes per second allocated on the GC Heap. This counter is updated at the end of every GC; not at each allocation. This counter is not an average over time; it displays the difference between the values observed in the last two samples divided by the duration of the sample interval.
  • Finalization Survivors: This counter displays the number of garbage collected objects that survive a collection because they are waiting to be finalized. If these objects hold references to other objects then those objects also survive but are not counted by this counter; the "Promoted Finalization-Memory from Gen 0" and "Promoted Finalization-Memory from Gen 1" counters represent all the memory that survived due to finalization. This counter is not a cumulative counter; it's updated at the end of every GC with the count of the survivors during that particular GC only. This counter was designed to indicate the extra overhead that the application might incur because of finalization.
  • Gen 0 Heap Size: This counter displays the maximum bytes that can be allocated in generation 0 (Gen 0); it does not indicate the current number of bytes allocated in Gen 0. A Gen 0 GC is triggered when the allocations since the last GC exceed this size. The Gen 0 size is tuned by the Garbage Collector and can change during the execution of the application. At the end of a Gen 0 collection the size of the Gen 0 heap is in fact 0 bytes; this counter displays the size (in bytes) of allocations that would trigger the next Gen 0 GC. This counter is updated at the end of a GC; it's not updated on every allocation.
  • Gen 0 Promoted Bytes/Sec: This counter displays the bytes per second that are promoted from generation 0 (youngest) to generation 1; objects that are promoted just because they are waiting to be finalized are not included in this counter. Memory is promoted when it survives a garbage collection. This counter was designed as an indicator of relatively long-lived objects being created per second. This counter displays the difference between the values observed in the last two samples divided by the duration of the sample interval.
  • Gen 1 Heap Size: This counter displays the current number of bytes in generation 1 (Gen 1); this counter does not display the maximum size of Gen 1. Objects are not directly allocated in this generation; they are promoted from previous Gen 0 GCs. This counter is updated at the end of a GC; it's not updated on every allocation.
  • Gen 1 Promoted Bytes/Sec: This counter displays the bytes per second that are promoted from generation 1 to generation 2 (oldest); objects that are promoted just because they are waiting to be finalized are not included in this counter. Memory is promoted when it survives a garbage collection. Nothing is promoted from generation 2 since it is the oldest. This counter was designed as an indicator of very long-lived objects being created per second. This counter displays the difference between the values observed in the last two samples divided by the duration of the sample interval.
  • Gen 2 Heap Size: This counter displays the current number of bytes in generation 2 (Gen 2). Objects are not directly allocated in this generation; they are promoted from Gen 1 during previous Gen 1 GCs. This counter is updated at the end of a GC; it's not updated on every allocation.
  • Large Object Heap Size: This counter displays the current size of the Large Object Heap in bytes. Objects greater than approximately 85,000 bytes (85 KB) are treated as large objects by the Garbage Collector and are directly allocated in a special heap; they are not promoted through the generations. This counter is updated at the end of a GC; it's not updated on every allocation.
  • Process ID: This counter displays the process ID of the CLR process instance being monitored.
  • Promoted Finalization-Memory from Gen 0: This counter displays the bytes of memory that are promoted from generation 0 to generation 1 just because they are waiting to be finalized. This counter displays the value observed at the end of the last GC; it's not a cumulative counter.
  • Promoted Memory from Gen 0: This counter displays the bytes of memory that survive garbage collection (GC) and are promoted from generation 0 to generation 1; objects that are promoted just because they are waiting to be finalized are not included in this counter. This counter displays the value observed at the end of the last GC; it's not a cumulative counter.
  • Promoted Memory from Gen 1: This counter displays the bytes of memory that survive garbage collection (GC) and are promoted from generation 1 to generation 2; objects that are promoted just because they are waiting to be finalized are not included in this counter. This counter displays the value observed at the end of the last GC; it's not a cumulative counter. This counter is reset to 0 if the last GC was a Gen 0 GC only.
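The counters above can also be read programmatically with System.Diagnostics.PerformanceCounter. A minimal sketch - the instance name "MyApp" is a placeholder for the process name of the application you want to watch:

```csharp
using System;
using System.Diagnostics;

class ClrMemoryMonitor
{
    static void Main()
    {
        // instance name = process name of the monitored application (placeholder)
        string instance = "MyApp";

        using (PerformanceCounter heapBytes = new PerformanceCounter(
                   ".NET CLR Memory", "# Bytes in all Heaps", instance))
        using (PerformanceCounter timeInGc = new PerformanceCounter(
                   ".NET CLR Memory", "% Time in GC", instance))
        {
            Console.WriteLine("# Bytes in all Heaps: {0}", heapBytes.NextValue());
            Console.WriteLine("% Time in GC: {0}", timeInGc.NextValue());
        }
    }
}
```

Note that % Time in GC is a sampled counter, so the first NextValue() call may return 0; sample it in a loop for meaningful readings.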

GC.Collect - how it really works - .net Garbage Collection

The GC (Garbage Collector) in .NET manages a heap which is initially created by the OS (operating system) on request of the CLR. This request adds 2 segments of 16 MB each (16 KB committed). One of them is allocated for Gen 0, Gen 1 & Gen 2, and the second segment is used for the LOH (Large Object Heap). Objects greater than approximately 85,000 bytes (85 KB) are treated as large objects by the Garbage Collector and are directly allocated in this special heap.

As we start to create small objects, the first segment starts to grow. Suppose we keep all instantiated objects without ever releasing any instance: once the segment gets full, the CLR asks the OS for another segment of 16 MB (16 KB committed) and continues to allocate object space from the new segment it received from the operating system.

If we free up memory - let's suppose all of the allocated 32 MB - the CLR still leaves you with a heap size of 32 MB even though almost nothing is committed. The GC holds on to all that free space and will not return it to the OS until there is memory pressure.

Under memory pressure the operating system sends a signal to the CLR to trim its working set, and the CLR returns the unused segments to the OS.

The conclusion is simple: as long as there is plenty of free memory, the operating system does not reclaim anything from running processes.
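The allocate-then-collect behaviour described above can be observed from code. A small sketch using GC.GetTotalMemory and an explicit GC.Collect (which, as noted for the Induced GC counter, you should normally leave to the runtime):

```csharp
using System;

class GcDemo
{
    static void Main()
    {
        long before = GC.GetTotalMemory(false);

        // allocate ~1 MB of small objects on the small object heap
        byte[][] blocks = new byte[100][];
        for (int i = 0; i < blocks.Length; i++)
            blocks[i] = new byte[10 * 1024];

        long after = GC.GetTotalMemory(false);
        Console.WriteLine("Allocated roughly {0} bytes", after - before);

        blocks = null;                 // drop all references
        GC.Collect();                  // induce a full collection
        GC.WaitForPendingFinalizers();
        Console.WriteLine("After collect: {0} bytes", GC.GetTotalMemory(true));
    }
}
```

Watching # Induced GC in Perfmon while this runs shows the explicit collection being counted.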

Friday, January 25, 2008

Performance average comparison between State Server and indeXus.Net Shared Cache

Performance Environment and preconditions:
Client: MS Windows XP Prof, Intel Pentium D, 2.8GHz, 3.24 GB RAM
Server: MS Windows Server 2003, Enterprise Edition SP2, Intel Pentium D, 2.8GHz, 3.24 GB RAM
Iterations: 1000

Object Size: ~1kb
Debug Mode: True

Session State on same server:
In-Process: 225 requests / second
Out-Of-Process: 165 requests / second

Session State on remote server:
Out-Of-Process, 55 requests/second (-68%)

indeXus.Net Shared Cache same server:
add: 3s 128ms -> 319.6 req/sec
get: 3s 096ms -> 322.3 req/sec
remove: 2s 868ms -> 348.6 req/sec

indeXus.Net Shared Cache remote server:
add: 3s 425ms -> 291.9 req/sec
get: 3s 620ms -> 276.2 req/sec
remove: 3s 556ms -> 297.9 req/sec



original page:
http://www.codeplex.com/SharedCache/Wiki/View.aspx?title=Speed%20Test%20Results

Additional comparisons will be added soon, also with SQL Server.

Tuesday, January 22, 2008

writing more tests does not mean you gain code coverage

Today's conclusion is based on the following number: my coverage gained 2.6%. Sure, it's better than nothing, but the conclusion is quite simple.

After the first bunch of unit tests I had written, I reached a coverage of 49.86% with a total of 70 unit tests. Maybe the name "unit tests" is not quite right, since some of them also test communication between the client and server modules. But let's keep it simple and call them unit tests anyway - tests which start the indeXus.Net Shared Cache server upon start-up to test all the different transport options over the wire.
Yesterday's screenshot with a total of 70 unit tests:


And today's screenshot with an additional 30 unit tests:


A more detailed view:



In terms of coverage the change is small, but in terms of testing my whole protocol is covered now. That is no less important than the coverage number, but I still hold on to the target of reaching 90% coverage.

Monday, January 21, 2008

SQL Server SELECT TOP x equivalent in ORACLE

No comment!

SQL Server:
SELECT TOP 10 * FROM myTable

ORACLE:
SELECT * FROM myTable WHERE ROWNUM <= 10

Note that ROWNUM is assigned before ORDER BY is applied, so to get the top 10 rows of a sorted result you have to sort in a subquery first:
SELECT * FROM (SELECT * FROM myTable ORDER BY myColumn) WHERE ROWNUM <= 10

MySQL:
SELECT * FROM myTable LIMIT 10

Code Coverage Part 1 - in combination with NDepend

Today's conclusion is simple ... write your unit tests while you develop your code. I have made the mistake again and thought I would do it later ... (now I have the salad!!! - a swiss-german expression!)

So bunch no. 1 of xxx has been checked in to indeXus.Net Shared Cache. I reached almost 50%, but note that there is a difference between reaching a certain percentage of coverage and well-written tests. Probably it's a combination of both, like everything in life!

Anyway, the target is to reach a rate of approx. 90% with high-quality tests.



Since I own a license of NDepend - http://www.ndepend.com/ - for indeXus.Net Shared Cache, I'm amused at what stuff I have sometimes developed that could be done better. Soon I'll publish some of the first conclusions I have drawn from NDepend :-) An amazing tool which I can recommend to everybody!

It is so easy to find things like big methods or poorly commented methods ;-) Here an example (maybe I should mention that most projects I have seen until today would have worse results than indeXus.Net Shared Cache):



Saturday, January 19, 2008

Backup your blog from blogger.com

Well done - today I found the way to back up my posts:

http://www.codeplex.com/bloggerbackup




Prototype JavaScript Image Cropper UI for .net

That's kind of amazing - have I ever seen such an amazing control?

http://www.defusion.org.uk/code/javascript-image-cropper-ui-using-prototype-scriptaculous/

Friday, January 18, 2008

Fastest Way to find an object that contains a property in an ArrayList

Don't use an ArrayList - use an IDictionary or ICollection implementation (e.g. Hashtable) so you can access items directly with col[YourKey]:



using System;
using System.Collections;

public class MyClass
{
    public class TestClass
    {
        public string MyName = string.Empty;
        public TestClass(string name)
        {
            this.MyName = name;
        }
    }

    public static void Main()
    {
        Hashtable ht = new Hashtable();

        // 44 random keys to look up; note: use a single Random instance,
        // otherwise repeated new Random() calls are seeded identically
        Random rnd = new Random();
        string[] findKey = new string[44];
        for (int i = 0; i < findKey.Length; i++)
        {
            findKey[i] = rnd.Next(1, 1000).ToString();
        }

        for (int i = 0; i < 1000; i++)
        {
            ht.Add(i.ToString(), new TestClass(i.ToString()));
        }

        DateTime startTime = DateTime.Now;
        foreach (string abc in findKey)
        {
            foreach (DictionaryEntry de in ht)
            {
                // use Equals - the == operator on object operands
                // compares references, not string contents
                if (abc.Equals(de.Key))
                {
                    TestClass tt = (TestClass)de.Value;
                    Console.WriteLine(tt.MyName);
                    break;
                }
            }
        }
        TimeSpan duration = DateTime.Now - startTime;

        Console.Write(@"foreach took (ms): ");
        Console.WriteLine(duration.TotalMilliseconds);

        startTime = DateTime.Now;
        foreach (string abc in findKey)
        {
            TestClass tt1 = (TestClass)ht[abc];
            Console.WriteLine(tt1.MyName);
        }
        duration = DateTime.Now - startTime;

        Console.Write(@"key took (ms): ");
        Console.WriteLine(duration.TotalMilliseconds);

        Console.ReadLine();
    }
}

Thursday, January 17, 2008

sorting dictionary by value

Oh hell, why didn't I think of this before!!!


public static List<KeyValuePair<string,long>> SortDictionary(Dictionary<string,long> data)
{
  // a Dictionary itself has no defined order, so the sorted result
  // is returned as a list of key/value pairs
  List<KeyValuePair<string,long>> result =
        new List<KeyValuePair<string,long>>(data);
  result.Sort(
    delegate(
      KeyValuePair<string,long> first,
      KeyValuePair<string,long> second)
        {
          return second.Value.CompareTo(first.Value);
        }
    );
  return result;
}

If you want the smallest amount to be on top, you need to switch the following line from:

return second.Value.CompareTo(first.Value);

to

return first.Value.CompareTo(second.Value);


The same method can also be used for Dictionary<string,int> or Dictionary<string,string> or any value type of your choice.

--
happy sorting of dictionaries by value ;-) yes I love generics ... just sometimes I don't see through all their possibilities

Wednesday, January 16, 2008

Text File does not get deployed from Class Library Project into Asp.NET Web Application

The issue sounds simple to me:

Text File does not get deployed from Class Library Project into Asp.NET Web Application

"We have a web project and would like to deploy HTML and
text templates from a common project into the web project
output folder.

The attributes on the text / html files are set like this:
Build Action: Content
Copy to Output Directory: Copy Always
It just copies the files into the common C# project under bin/debug,
but they don't appear within the web project.

Any idea how to solve this issue without a lot of post-build scripts?"

I posted this problem two days ago at:
http://forums.asp.net/t/1205184.aspx

To fix this issue you simply need to think in a different direction - the opposite one :-)

Step 1: Don't add the files you would like to share in TFS (Team Foundation Server) to your Class Library project - add the files (html / xml / text / etc.) to the App_Data folder of your ASP.NET Web Application project.
Step 2: Create the folder within your Class Library project where you would like to have the files. Once you have created the folder, right-click on it and select: Add -> Existing Item.

This opens a dialog with a small hint which is the key to this issue. Don't simply press "Add" - you need to click the small arrow next to the button and select "Add As Link", as the screenshot below shows:















Once you have done the above steps, select your file and look at the file properties within Visual Studio. There you find some attributes you need to check: "Build Action:" should be set to "Content" and "Copy to Output Directory:" to "Copy Always" or "Copy if newer".

In my case I no longer need to keep the files redundantly and sync them upon changes ....


"Just think differently and you will solve the problem!" was the motto of this issue!

Sunday, January 13, 2008

convert a string to a byte array and convert a byte array to a string

The .Net framework offers a quite easy way to handle this:

convert a string to a byte array:
byte[] buffer = System.Text.Encoding.ASCII.GetBytes("any string you like, e.g: indeXus.Net Shared Cache - the distributed caching solution");

convert a byte array to a string:
string fromByteArray = System.Text.Encoding.ASCII.GetString(buffer);
Console.WriteLine(fromByteArray);

Result will be:
"any string you like, e.g: indeXus.Net Shared Cache - the distributed caching solution"
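One caveat worth adding: ASCII encoding silently replaces every non-ASCII character with '?', so it only round-trips for plain English text. For arbitrary strings, System.Text.Encoding.UTF8 is the safe choice:

```csharp
using System;
using System.Text;

class EncodingDemo
{
    static void Main()
    {
        string original = "indeXus.Net Shared Cache - with umlauts: äöü";

        // UTF-8 can represent any character, so the round trip is lossless
        byte[] utf8 = Encoding.UTF8.GetBytes(original);
        string roundTrip = Encoding.UTF8.GetString(utf8);

        Console.WriteLine(roundTrip == original); // prints True
    }
}
```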

Wednesday, January 09, 2008

Topology Quiz no. 1: What cache topology would be optimal for this application type

- Caching 2GB of data
- Read-Heavy, updated nightly
- Several hundred users
- Several thousands requests per minute

feel free to discuss this at sharedcache - your .net distributed caching solution

Topology Quiz no. 2: What cache topology would be optimal for this application type


- caching user preferences on an in-house application

- several hundred concurrent users

- pref. updated several times a day

feel free to discuss this at sharedcache - your .net distributed caching solution

Topology Quiz no. 3: What cache topology would be optimal for this application type

- logging user interactions to a database for internal auditing purposes
- 1000 updates per minute


feel free to discuss this at sharedcache - your .net distributed caching solution

Wednesday, January 02, 2008

architecture and distributed caching thoughts in combination with indeXus.Net SharedCache - Part 7

SharedCache thought no. 7: scope's and expectations

Let's start by talking about expectations: do not assume - verify that the cache works exactly as expected. Take advantage of Perfmon and the notify application to monitor the cache, what it contains and how it works.

Keep things in the right scope, which means: do not use cache functionality for session information. HTTP Session objects can be used for caching user- and/or session-specific information, but don't use them as a cache for global information - and vice versa.

------------------

Download your copy of SharedCache: http://www.sharedcache.com
SharedCache will soon release session provider, which will assist you to work with SharedCache and ASP.Net Sessions. Soon i will add some additional thought about how to provide optimistic transactional handling with SharedCache.

SharedCache is free and open source - the only way you can see exactly what happens within your application.

* - RDBMS - relational database management system
** - DAO - Data Access Objects

architecture and distributed caching thoughts in combination with indeXus.Net SharedCache - Part 6

SharedCache thought no. 6: Identities and Keys


As everywhere, you make life easier for yourself when you have a clear idea of how you will handle your identities / keys. If you create an identity class, ensure that Equals() behaves consistently (and override GetHashCode() to match). And a tip for all the people who don't like to override the ToString() method of their objects: a good override can be very helpful for debugging. From my perspective a native .NET type matches all my needs, e.g. string, long, int, etc.
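A sketch of such an identity class (CountryKey is a hypothetical name, following the country example from part 4): Equals() and GetHashCode() must agree, and a readable ToString() pays off when debugging cache contents:

```csharp
using System;

// Hypothetical cache key class - illustrative only
public sealed class CountryKey
{
    public readonly int CountryId;
    public readonly string Iso2Code;

    public CountryKey(int countryId, string iso2Code)
    {
        CountryId = countryId;
        Iso2Code = iso2Code;
    }

    public override bool Equals(object obj)
    {
        CountryKey other = obj as CountryKey;
        return other != null
            && other.CountryId == CountryId
            && other.Iso2Code == Iso2Code;
    }

    // equal keys must produce equal hash codes
    public override int GetHashCode()
    {
        return CountryId ^ Iso2Code.GetHashCode();
    }

    // a readable ToString() is very helpful for debugging
    public override string ToString()
    {
        return CountryId + "-" + Iso2Code;
    }
}
```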

------------------

Download your copy of SharedCache:
http://www.sharedcache.com

SharedCache will soon release a session provider, which will assist you in working with SharedCache and ASP.NET sessions. Soon I will add some additional thoughts about how to provide optimistic transactional handling with SharedCache.

SharedCache is free and open source - the only way you can see exactly what happens within your application.

* - RDBMS - relational database management system
** - DAO - Data Access Objects

architecture and distributed caching thoughts in combination with indeXus.Net SharedCache - Part 5

SharedCache thought no. 5: Optimize Serialization

Serialization - easy - just add an attribute on top of the class and we are done, right? Objects that are stored in a cache need to be serialized, and the default CLR (.NET) serialization is inefficient and performs poorly. Besides that, the serialized output is huge, which has another impact: memory usage can be reduced by around 50% by implementing the ISerializable interface (custom serialization) instead of relying on the [Serializable] attribute alone.

Another very interesting point: depending on your object's data structure, consider serializing using data streams instead of object streams - this can have a phenomenal positive impact. Serialization performance improvements of up to an order of magnitude are possible, and the reduction in size can be up to 80%!!!

Which is not nothing - this is a huge difference!
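A sketch of what custom serialization looks like (the class and field names are illustrative, not the SharedCache implementation): implementing ISerializable lets you write only the fields you need, under short names, instead of the default formatter's full field dump:

```csharp
using System;
using System.Runtime.Serialization;

// Illustrative cache payload class with custom serialization
[Serializable]
public class CacheItem : ISerializable
{
    public string Key;
    public long Value;

    public CacheItem(string key, long value)
    {
        Key = key;
        Value = value;
    }

    // deserialization constructor required by ISerializable
    protected CacheItem(SerializationInfo info, StreamingContext context)
    {
        Key = info.GetString("k");
        Value = info.GetInt64("v");
    }

    // serialize only what is needed, under short names, to keep the payload small
    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("k", Key);
        info.AddValue("v", Value);
    }
}
```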

------------------

Download your copy of SharedCache: http://www.sharedcache.com

SharedCache will soon release a session provider, which will assist you in working with SharedCache and ASP.NET sessions.

Soon i will add some additional thought about how to provide optimistic transactional handling with SharedCache.

SharedCache is free and open source - the only way you can see exactly what happens within your application.

* - RDBMS - relational database management system
** - DAO - Data Access Objects

architecture and distributed caching thoughts in combination with indeXus.Net SharedCache - Part 4

SharedCache thought no. 4: Granularity!

Granularity is a great keyword, but we need to think about it. Every application contains a natural granularity for all data. RDBMS* have a normalized granularity which is exposed via tables. ODBC / JDBC-based applications normally provide a statement-execution granularity and a result-set granularity. ORM-based applications and cache-intensive applications often have an object-oriented granularity that mirrors the data model of the RDBMS*.


The conclusion is the following: normally each business-object class has its own cache relation. Take country and region: after the first initial load, both will pick up data from the cache and not from the RDBMS*. Each object tends to have a natural key - say the countryId or regionId, or even a combination with some other data, like this: countryId + Iso2Code.

Application objects are usually very complex, with a lot of other options; let's say you enable your country class to contain a list of all its regions. If the regions are already in the cache, you don't need to access your RDBMS* system at all ;-)

Create yourself a caching strategy; this will make your life easier when managing large object graphs, and it enables efficient lazy-loading functionality.
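Such a strategy can be as simple as a cache-aside helper. This sketch uses a plain in-process Dictionary and a loader delegate as stand-ins (it is not the SharedCache API) to show the country/region idea: the database is only hit on a cache miss:

```csharp
using System;
using System.Collections.Generic;

// delegate that loads a value from the backing store (e.g. the RDBMS)
public delegate T Loader<T>();

public class SimpleCache
{
    private readonly Dictionary<string, object> store =
        new Dictionary<string, object>();

    public T GetOrLoad<T>(string key, Loader<T> loadFromDb)
    {
        object cached;
        if (store.TryGetValue(key, out cached))
            return (T)cached;        // cache hit - no RDBMS round trip

        T value = loadFromDb();      // cache miss - hit the database once
        store[key] = value;
        return value;
    }
}
```

Usage would look like `cache.GetOrLoad("country-41-CH", delegate { return LoadCountryFromDb(41); })`, where LoadCountryFromDb is your hypothetical data-access call; the second request for the same key never touches the database.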

------------------

Download your copy of SharedCache: http://www.sharedcache.com

SharedCache will soon release session provider, which will assist you to work with SharedCache and ASP.Net Sessions.

Soon i will add some additional thought about how to provide optimistic transactional handling with SharedCache.

SharedCache is free and open source - the only way you can see exactly what happens within your application.

* - RDBMS - relational database management system
** - DAO - Data Access Objects

architecture and distributed caching thoughts in combination with indeXus.Net SharedCache - Part 3

SharedCache thought no. 3: Modeling is religion - no, it's a tool!


Domain modeling is not new at all - object-oriented developers have been using it for years. Application developers acting as database administrators have used modeling to break down their ideas into a database design that serves both sides: optimal application implementation and optimal database organisation.


Now what is this Domain Model? It is a combination of a Data Model and a Behavioural Model. The Data Model describes nothing other than the state the application maintains, in terms of persistence and run-time data (e.g. session data / request data / response data / etc.). The Domain Model is not so far away from SOA (Service Oriented Architecture): the Data Model is analogous to the information which is encapsulated and managed behind a set of services, while the Behavioural Model corresponds to the set of services exposed by brokers. The value behind this is simple: the Data Model exists independently of the Behavioural Model, supporting the separation of the controller from the model in an MVC architecture.

------------------

Download your copy of SharedCache: http://www.sharedcache.com

SharedCache will soon release session provider, which will assist you to work with SharedCache and ASP.Net Sessions.

Soon i will add some additional thought about how to provide optimistic transactional handling with SharedCache.

SharedCache is free and open source - the only way you can see exactly what happens within your application.

* - RDBMS - relational database management system
** - DAO - Data Access Objects

architecture and distributed caching thoughts in combination with indeXus.Net SharedCache - Part 2

SharedCache thought no. 2: How am I going to access my data?


There are almost endless ways to retrieve data from your RDBMS* system.


Here a small list of options:

  • ORM (Object Relational Mappers) [my favorite: MyGeneration - thanks to Mike Griffin for this amazing tool]
  • ODBC / JDBC (or other direct APIs)
  • your own implementation [you are going to work very hard :-) ]


In almost every application there is a "best way" to access your data. A large-scale, set-oriented application will usually be ODBC / JDBC oriented, while a mix of set- and identity-oriented access is an indication for ORMs. It is very important to understand and know your application before making this decision; for most applications the best way depends on their scalability requirements and development time.


Consider: databases hate nothing more than retrieving the data of one single row at a time! Choosing the wrong approach is disastrous!


Let's put it this way: RDBMS (ODBC / JDBC) systems are optimized for set-based queries and operations, including joins and data aggregation. They will crumble under heavy row-level access applications (1+N access patterns).
It is always a good approach to discuss an architecture openly, since you have to consider many things and should be careful when making your decision. The obvious way is not always the "best choice".
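The 1+N pattern can be made concrete with a small counter sketch. This is a hypothetical simulation (no real database; the fetch methods and row counts are invented for illustration), but it shows why row-by-row access multiplies round trips while a set-based join stays at one:

```java
import java.util.List;

// Simulates round trips to the database by counting issued queries.
class QueryCounter {
    int queriesIssued = 0;

    // One query to list the parent rows (pretend there are 3 orders).
    List<Integer> fetchOrderIds() { queriesIssued++; return List.of(1, 2, 3); }

    // One query per row - this is the "N" in 1+N.
    String fetchCustomerForOrder(int orderId) { queriesIssued++; return "cust-" + orderId; }

    // Set-based alternative: a single JOIN returns everything at once.
    int fetchOrdersWithCustomers() { queriesIssued++; return 3; }

    // The naive pattern issues 1 + N round trips.
    int naivePattern() {
        for (int id : fetchOrderIds()) fetchCustomerForOrder(id);
        return queriesIssued;
    }
}
```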


Most applications have a mix of intensive row-level and large set-level operations, which lend themselves poorly to any single approach. To this day I haven't seen an architecture in which the issue of "exceptions to the rule" does not come up! Even in a well-architected and carefully designed application there can always be exceptions that require breaking the rule for a specific data-access approach.


Normally, or at least in 90% of the cases I have seen so far, it is possible to cache ODBC result sets. I have also seen ORM systems that can optimize a SQL query into a set-based form so that the entire operation is performed within the RDBMS. You should always consider how your application consumes data.

------------------

Download your copy of SharedCache: http://www.sharedcache.com

SharedCache will soon release a session provider, which will help you work with SharedCache and ASP.Net sessions.

Soon I will add some additional thoughts about how to provide optimistic transaction handling with SharedCache.

SharedCache is free and open source - the only way you can see exactly what happens within your application.


* - RDBMS - relational database management system

** - DAO - Data Access Objects

architecture and distributed caching thoughts in combination with indeXus.Net SharedCache - Part 2

SharedCache thought no. 1: Define a clear delineation for the cache responsibility

You could, for example, decide to use caching in your model within your DAO**, where the cache holds the object value. The goal is to ensure that all accesses are served from SharedCache instead of your RDBMS*. The access latency of SharedCache will be around 0.001 sec., while a call to the RDBMS* can take up to 0.050 sec. That does not seem to make much difference, but since we are talking about scalable systems, this is where the changes start.
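A DAO with this delineation follows the classic cache-aside pattern: check the cache first, fall back to the database only on a miss, then populate the cache. The sketch below is illustrative - the class names and the in-memory map standing in for SharedCache are assumptions, not the real SharedCache API:

```java
import java.util.HashMap;
import java.util.Map;

class CachedCountryDao {
    // Stand-in for a SharedCache client; in a real DAO this would be
    // the distributed cache, not a local map.
    private final Map<Integer, String> cache = new HashMap<>();
    int dbCalls = 0;

    // Hypothetical RDBMS access (~0.050 sec. in the text's example).
    private String loadFromDb(int id) { dbCalls++; return "country-" + id; }

    // Cache-aside: serve from the cache (~0.001 sec.) whenever possible.
    String getCountry(int id) {
        String hit = cache.get(id);
        if (hit != null) return hit;
        String value = loadFromDb(id);
        cache.put(id, value);
        return value;
    }
}
```

With this shape, all callers go through `getCountry`, so the cache responsibility lives in exactly one place - the delineation the thought above asks for.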

------------------

Download your copy of SharedCache:
http://www.sharedcache.com
SharedCache will soon release a session provider, which will help you work with SharedCache and ASP.Net sessions.

Soon I will add some additional thoughts about how to provide optimistic transaction handling with SharedCache.

SharedCache is free and open source - the only way you can see exactly what happens within your application.

* - RDBMS - relational database management system
** - DAO - Data Access Objects

architecture and distributed caching thoughts in combination with indeXus.Net SharedCache - Part 1

One of the discussions that always comes up, and that people have asked about for years, is how to develop an ASP.Net web application using model-view-controller (MVC).

So what is a Model View Controller (MVC) Framework?

MVC is a framework methodology. The concept of a domain model is technology-neutral; it can even exist only in the abstract -> modeling is a tool, not a religion!

  • "Models": the domain-specific representation of the information the application displays and operates on (e.g. we might have a Country class that represents all rows from the Country table in the RDBMS* used).
  • "Views": render the model into a form suitable for interaction, typically a user interface element or document. Typically this user interface is created from the model data (for example, we create a Country "Edit" view that populates input boxes to maintain the information based on the current state).
  • "Controllers": responsible for acting when an event occurs - typically user actions or service requests - and invoking changes on the model. Here it is all about controlling the response actions.
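The three roles above can be sketched minimally. This is an illustrative toy (the class names and the HTML rendering are invented for the example, not the ASP.NET MVC API): the controller receives a request, queries the model, and hands the data to the view for rendering:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Model: holds the domain-specific state (here, countries by id).
class CountryModel {
    private final Map<Integer, String> countries = new LinkedHashMap<>();
    void put(int id, String name) { countries.put(id, name); }
    String get(int id) { return countries.get(id); }
}

// View: renders model data into a form suitable for interaction.
class CountryView {
    String render(String name) { return "<h1>" + name + "</h1>"; }
}

// Controller: handles the request and wires model data to the view.
class CountryController {
    private final CountryModel model;
    private final CountryView view;
    CountryController(CountryModel m, CountryView v) { model = m; view = v; }
    String show(int id) { return view.render(model.get(id)); }
}
```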

Scott Guthrie has published 4 blog entries around this topic:

  1. ASP.NET MVC Framework - see also the ScreenCast from Scott Hanselman: ScottGu MVC Presentation and ScottHa Screencast from ALT.NET Conference
  2. ASP.NET MVC Framework (Part 1)
  3. ASP.NET MVC Framework (Part 2): URL Routing
  4. ASP.NET MVC Framework (Part 3): Passing ViewData from Controllers to Views

Okay, enough about MVC!

Download your copy of SharedCache: http://www.sharedcache.com

SharedCache will soon release a session provider, which will help you work with SharedCache and ASP.Net sessions.
Soon I will add some additional thoughts about how to provide optimistic transaction handling with SharedCache.

SharedCache is free and open source - the only way you can see exactly what happens within your application.

* - RDBMS - relational database management system

** - DAO - Data Access Objects

Tuesday, January 01, 2008

export data from sql server 2005 to xml

It's very easy to export data from SQL Server 2005 to an XML file
without any stored procedures or anything else:

SELECT nCountryId, cName, cIso2, cIso3, cCapitalCity, cMapReference, cCurrency, cNameAr, cNameZh, cNameJa
FROM ConCountry
WHERE
(cName IS NOT NULL OR
 cIso2 IS NOT NULL OR
 cIso3 IS NOT NULL OR
 cCapitalCity IS NOT NULL OR
 cMapReference IS NOT NULL OR
 cCurrency IS NOT NULL OR
 cNameAr IS NOT NULL OR
 cNameZh IS NOT NULL OR
 cNameJa IS NOT NULL)
AND nCountryId > 0
FOR XML AUTO, TYPE, ELEMENTS -- this is the key part of it
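With the ELEMENTS option, each selected non-null column becomes a child element of a row element named after the table. For a hypothetical row the output looks roughly like this (illustrative values, not actual data):

```xml
<ConCountry>
  <nCountryId>1</nCountryId>
  <cName>Switzerland</cName>
  <cIso2>CH</cIso2>
  <cIso3>CHE</cIso3>
</ConCountry>
```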

if you like to work with templates, the following link shows a very easy sample:
http://sqlxml.org/faqs.aspx?faq=29

Shared Cache - .Net Caching made easy

All information about Shared Cache is available here: http://www.sharedcache.com/. It's free and easy to use; we provide all sources at CodePlex.
