From The Art of Enterprise Information Architecture.
The book identifies these data domains:
Metadata
--data about data
Master data
--such as customer, product, and invoice
Operational data
--such as order
Unstructured data
--such as scanned signed agreements and related emails
Analytical data
--derived by putting operational data into an analytical context, typically by moving operational data into a dedicated analytical system such as a data warehouse (DW).
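The taxonomy above can be illustrated with a toy Python mapping (the example records are made up for illustration):

```python
# Hypothetical examples mapped to the five data domains above.
data_domains = {
    "metadata": ["table schema", "column data types"],
    "master data": ["customer", "product", "invoice"],
    "operational data": ["order"],
    "unstructured data": ["scanned signed agreement", "related email"],
    "analytical data": ["monthly sales by region (derived in the DW)"],
}

for domain, examples in data_domains.items():
    print(f"{domain}: {', '.join(examples)}")
```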
Quick tips and notes that probably reflect the 20 percent of knowledge that usually does 80 percent of the job.
Wednesday, June 18, 2014
SOA
SOA is an architectural style designed with the goal of achieving loose coupling among interacting services based on open standards and protocols.
A service is a unit of work done by a service provider to achieve desired end results for a service consumer. Both provider and consumer are roles played by organizational units and software agents on behalf of their owners. Fine-grained services can be composed into coarse-grained services. Business processes are executed by weaving a series of services together.
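The composition idea can be sketched in Python (an illustrative toy, not a SOA framework): fine-grained services are modeled as callables, and a coarse-grained service weaves them into a business process. The service names and the order record are made up for this sketch.

```python
# Fine-grained services: each does one unit of work for a consumer.
def check_inventory(order):
    return {**order, "in_stock": True}

def charge_payment(order):
    return {**order, "paid": True}

def ship(order):
    return {**order, "shipped": True}

def compose(*services):
    """Weave fine-grained services into one coarse-grained service."""
    def business_process(order):
        for service in services:
            order = service(order)
        return order
    return business_process

# The coarse-grained "place order" process, executed as a series of services.
place_order = compose(check_inventory, charge_payment, ship)
print(place_order({"id": 42}))
```

In a real SOA the services would be remote endpoints behind open protocols rather than in-process functions, but the composition pattern is the same.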
Friday, March 14, 2014
.Net Oracle Connection String
With a tnsnames.ora alias:
Data Source=MyOracleDB;User Id=myUsername;Password=myPassword;Integrated Security=no;
Without tnsnames.ora (full connect descriptor):
SERVER=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=MyHost)(PORT=MyPort))(CONNECT_DATA=(SERVICE_NAME=MyOracleSID)));uid=myUsername;pwd=myPassword;
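The tnsnames-less descriptor can also be assembled programmatically. A minimal Python sketch (the function names are mine; host, port, service name, and credentials are placeholders):

```python
def oracle_descriptor(host, port, service_name):
    """Build an Oracle connect descriptor (the tnsnames-less form above)."""
    return (
        "(DESCRIPTION="
        f"(ADDRESS=(PROTOCOL=TCP)(HOST={host})(PORT={port}))"
        f"(CONNECT_DATA=(SERVICE_NAME={service_name})))"
    )

def connection_string(host, port, service_name, user, password):
    """Combine the descriptor with credentials into a full connection string."""
    server = oracle_descriptor(host, port, service_name)
    return f"SERVER={server};uid={user};pwd={password};"

print(connection_string("MyHost", 1521, "MyOracleSID", "myUsername", "myPassword"))
```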
Wednesday, February 19, 2014
Creating a SQL Server 2012 Failover Cluster on Windows Server 2012
Feb 19, 2014
This is my practice for today.
Facts:
A failover cluster provides high availability, but not scalability.
A network load balancing cluster provides both.
A SQL Server failover cluster does not provide scalability.
There is only an active-passive type of SQL Server cluster, no active-active; "active-active" actually refers to two separate clusters, which is misleading and unprofessional terminology.
Impressions:
--do this on Windows Server 2012 first:
dism /online /enable-feature /featurename:netfx3 /all /source:d:\sources\sxs
--It did not fail over automatically until I manually made it choose the best possible owner.
--After shutting down the node that was the current owner, failover happened automatically.
Summary:
--install iSCSI or another shared storage solution first
(iSCSI SAN solutions: the iSCSI target in Windows Server 2012, virtual SAN, FreeNAS, Openfiler, StarWind, etc.)
--have at least two shared disks
--install Windows failover clustering
--create a new SQL Server cluster
--add a node to the SQL Server cluster
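The Windows-side steps above can be outlined in PowerShell (a hedged sketch of this practice setup, not a complete runbook; the node names W12SVR1/W12SVR2, the cluster name W12C1, and the addresses come from my lab):

```shell
# Install the failover clustering feature (run on each cluster node).
Install-WindowsFeature Failover-Clustering -IncludeManagementTools

# Enable .NET Framework 3.5 from the mounted installation media
# (SQL Server 2012 setup requires it).
dism /online /enable-feature /featurename:netfx3 /all /source:d:\sources\sxs

# Validate, then create the Windows failover cluster across the two nodes.
Test-Cluster -Node W12SVR1, W12SVR2
New-Cluster -Name W12C1 -Node W12SVR1, W12SVR2 -StaticAddress 192.168.1.10
```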
Detailed Steps:
1. Install servers on VMware, all on a local network
1.1 DC
--name: W12DC
--failover clustering feature
--DNS
--iSCSI target
--create two virtual disks on this server; one will be used as the quorum disk
--join the domain
1.2 SVR1
--name: W12SVR1
--failover clustering feature (install failover clustering on this machine)
--iSCSI initiator
1.3 SVR2
--name: W12SVR2
--failover clustering feature
--iSCSI initiator
--join the domain
2. Create a Windows failover cluster
Create the failover cluster on SVR1, including SVR2.
Note: I tried to use only DC and SVR1, but that failed because iSCSI did not seem to work well in that configuration. Creating the cluster on SVR1 and SVR2 succeeded; it seems you should not put the iSCSI target and initiator on the same node.
W12C1 is the cluster name.
It was pretty quick to create the cluster on SVR1 and SVR2, while creating one for W12DC and W12SVR1 always failed.
Shared storage was recognized automatically.
3. Create a SQL Server cluster
--run a new SQL Server failover cluster installation on W12SVR1
It will fail if the Windows failover cluster was not installed in the first place.
--I went back to Windows Failover Cluster Manager to add an extra disk for SQL Server to use, because the first one was used as the quorum disk.
--specify a new cluster group name
--specify a new network IP address for the SQL Server cluster. I used 192.168.1.11 since the Windows failover cluster used 192.168.1.10; I wanted this defined so I could check the IP addresses later.
--configure the service account, authentication mode, etc.
--it failed at one step, telling me to enable the netfx3 feature and run it again.
--mount the Windows Server 2012 DVD and run this to install netfx3 on both SVR1 and SVR2 (I hated this):
dism /online /enable-feature /featurename:netfx3 /all /source:d:\sources\sxs
--restart the installation; this is why I hate poorly prepared releases.
4. Add a node to the SQL Server failover cluster
--since this node is already part of the Windows failover cluster environment, it automatically discovers the previously created SQL Server cluster network name to join. Give the node a name and join it into the cluster network.
--for the cluster network configuration, it reuses the previously defined 192.168.1.11 address. Good.
--far fewer steps; then it starts installing the software and configuring it to be ready for use.
Done.
--During installation, I did not pay attention to how the database files are distributed, since I had only one shared disk to use. But it seems the installer automatically placed the database on shared storage, so we do not need to manually configure the system to use identical file locations.
Sunday, February 16, 2014
Common Sense Failed to Work
I know Unicode is supported by nvarchar and the other n-prefixed types in SQL Server, but I also took it for granted that SQL Server would have a Unicode collation you could set as the database's default character set, just as other databases do (Oracle can use UTF8 and AL32UTF8 as the database-level character set), since nowadays these kinds of basic features tend to be similar across major databases. Then I started to wonder: if a Unicode collation were set at the database level, would it actually make no difference between varchar and nvarchar?
It turned out that I couldn't find a Unicode collation for SQL Server (does anybody know of one?), and Unicode can only be supported by the n-prefixed data types in SQL Server. The data in those types is encoded in UCS-2, which is very similar to UTF-16.
So for now, if you migrate an application that uses Oracle with a UTF character set to SQL Server, you will have to define the corresponding string columns as n-prefixed types in order to support Unicode.
Note: In recent versions of Oracle, you are no longer able to choose the same character set for both varchar2 and nvarchar2.
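The encoding difference can be illustrated with a few lines of Python (an illustration of the byte layout, not a SQL Server API; nvarchar data is stored as UTF-16 little-endian code units):

```python
# nvarchar stores characters as UTF-16 (little-endian) code units,
# so BMP characters take 2 bytes each.
text = "héllo"
utf16 = text.encode("utf-16-le")
print(len(text), len(utf16))  # 5 characters -> 10 bytes

# A character outside a single-byte code page cannot round-trip through
# varchar with a non-Unicode collation; Latin-1 stands in for such a
# code page here.
cjk = "中文"
print(cjk.encode("utf-16-le").hex())
try:
    cjk.encode("latin-1")
except UnicodeEncodeError as e:
    print("not representable in a single-byte code page:", e.reason)
```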
Saturday, January 18, 2014
Computer Storage
Simple to complex:
1. Directly attached disks/storage (DAS), no RAID
Most computers at home and in the office have this configuration.
2. Directly attached disks, but with RAID
Provides better data protection. Most small-business servers have this configuration.
RAID 0: Striping
RAID 1: Mirroring
RAID 5: Block-level striping with distributed parity
RAID 10: striped mirrors (1 + 0)
http://en.wikipedia.org/wiki/RAID
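The capacity trade-off of the RAID levels above can be sketched for n identical disks (a simplified model; real arrays add controller, alignment, and hot-spare overhead):

```python
def usable_capacity(level, n_disks, disk_size):
    """Approximate usable capacity per RAID level (identical disks)."""
    if level == 0:    # striping: no redundancy, all space usable
        return n_disks * disk_size
    if level == 1:    # mirroring: half the raw space
        return n_disks * disk_size // 2
    if level == 5:    # one disk's worth of distributed parity
        return (n_disks - 1) * disk_size
    if level == 10:   # striped mirrors: half the raw space
        return n_disks * disk_size // 2
    raise ValueError(f"unsupported RAID level: {level}")

for level in (0, 1, 5, 10):
    print(f"RAID {level:>2}: {usable_capacity(level, 4, 1000)} GB from 4 x 1000 GB")
```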
3. File-level shared storage
Network-attached storage (NAS), used by homes and small businesses. Storage is provided over the network; the NAS server has its own file systems and IP address.
Accessed over a network, usually an IP network. The OS sees the NAS as a file server.
4. Shared block-level storage
A SAN is a type of shared block-level storage. It is itself a private network, with storage linked by Fibre Channel, switches, etc. It has no file system of its own; it is connected to a computer via an interface similar to the one used for DAS, such as iSCSI. So usually a physical interface is installed on the computer to access the SAN.
Accessed over a physical interface installed on the computer, similar to a network card. The OS sees the SAN as a disk.
5. RAID vs. SAN
Storage exposed by a SAN appears as hard drives to a computer, so RAID can be built on top of it.
Since a SAN is itself composed of hard drives, it can also be built on top of RAID arrays.
One advantage of NAS and SAN is that the storage is no longer attached to any specific computer, so you do not need to move storage from one server to another when a server has to be replaced. By sharing the storage among multiple applications, storage utilization is also supposed to be higher.
Below are useful illustrations from Wikipedia.
Monday, December 23, 2013
Columnstore Index
If all the columns are used to build the index, it becomes a copy of the original table with the column values stored differently: column-oriented instead of row-oriented.
Data in the table is treated as read-only; this is good for data-warehousing analytic queries but not for everyday OLTP, since a table with a columnstore index can't be updated.
For a query that already uses most of the columns in a table, the performance gain can be limited. In that case, the columnstore index's management overhead (recombining the rows, etc.) can outweigh the benefits it brings.
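The row-vs-column layout can be sketched in a few lines of Python (a toy model, not SQL Server's actual storage format): an analytic query over one column touches a single contiguous list in the columnar layout instead of walking every row.

```python
# Toy model of row-oriented vs column-oriented storage.
rows = [
    {"id": 1, "region": "east", "amount": 100},
    {"id": 2, "region": "west", "amount": 250},
    {"id": 3, "region": "east", "amount": 75},
]

# Column-oriented copy: one list per column, like a columnstore index
# built over all columns of the table.
columns = {key: [row[key] for row in rows] for key in rows[0]}

# An analytic query (sum of one column) reads only the "amount" list
# instead of visiting every row and skipping the other fields.
total_row_oriented = sum(row["amount"] for row in rows)
total_columnar = sum(columns["amount"])
print(total_row_oriented, total_columnar)  # both 425
```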