Category Archives: Software Development

PowerShell and Microsoft Exchange

I thought I would post some work I did a few years ago showing off the powerful PowerShell scripting language.

getMdb attempts to scatter mailbox-assignment requests by using a random number to choose a mailbox store from the servers in a region; in this example there are just two regions, (East/North) and (West/South).

It uses a COM hook (CDOEXM) into the Exchange server to read server attributes and storage groups.

 #  Name.......: getMdb()
 #  Description: Used to select a random mailstore based on region and archive attributes
 #  Inputs.....: $region (North, West, South, East) Used to select destination mailbox server
 #               $archive (True, False) Used to select proper mailbox store based on journaling flag

function getMdb(){
    param($region, $archive)

    # Local variables
    $mailstore    = @()   # regular mailbox stores
    $archivestore = @()   # journaling/archive (Zantaz) mailbox stores

    $Random = New-Object Random

    # Exchange server selection process based on region
    if($region -like "EAST" -or $region -like "NORTH"){
        $mservers = @("server list A...")   # placeholder: array of East/North mailbox servers
    }
    elseif($region -like "SOUTH" -or $region -like "WEST"){
        $mservers = @("server list B...")   # placeholder: array of South/West mailbox servers
    }

    foreach ($server in $mservers){

        # CDOEXM COM objects expose the server and its storage groups
        $excObj = New-Object -ComObject CDOEXM.ExchangeServer
        $sgObj  = New-Object -ComObject CDOEXM.StorageGroup
        $excObj.DataSource.Open([string]$server)
        $sgs = $excObj.StorageGroups

        foreach ($objItem in $sgs) {
            $sgObj.DataSource.Open($objItem)
            $dbs = $sgObj.MailboxStoreDbs

            # Sort each mailbox store into the archive or regular bucket
            foreach ($mstore in $dbs){
                if($mstore -like "*Zantaz*"){
                    $archivestore += $mstore
                }
                else {
                    $mailstore += $mstore
                }
            } # EndForEach mstore
        } # EndForEach storage group
    } # EndForEach server
    "LOG: getMdb - Return mailstores $mailstore" >> $logfile

    # Pick a random store from the appropriate bucket and publish it globally
    if($archive -eq $True){
        $index = $Random.Next(0, $archivestore.Count)
        $global:mdb = [string] $archivestore[$index]
    }
    else{
        $index = $Random.Next(0, $mailstore.Count)
        $global:mdb = [string] $mailstore[$index]
    }
    Write-Host "Return MDB $($mdb)"

    "LOG: getMdb - Return MDB $mdb" >> $logfile
}
#// -end getMdb
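
For reference, a quick example of calling it might look like this (the log path is just a placeholder; the result comes back in $global:mdb):

# Example invocation (paths/values are placeholders)
$logfile = "C:\temp\getMdb.log"            # the function appends its log lines here
getMdb -region "EAST" -archive $false      # pick a regular (non-archive) store in the East/North region
Write-Host "Selected mailbox store: $global:mdb"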

What does IT know about the stability and viability of a software provider?

After reading the response from BusinessWeek writer Rachael King on Dennis Byron’s blog post “Is BusinessWeek out to Get the Enterprise Software Business?”, I ask myself how close to the truth the following comment is:

“IT departments need to think about the stability and viability of the software provider”

How does one assess “viability”? Is it the software provider’s balance sheet, number of developers, or R&D budget? Is it the number of bugs, patches, and updates in their software packages, or how quickly they respond to problems?

Now there is plenty of enterprise software out there that provides the backbone of major corporations, institutions and governments. Microsoft Exchange and Active Directory have been pivotal in providing a relatively stable platform for services like email and authentication to the business. But I would argue this is not where businesses make their money. Let’s be honest, businesses must learn to utilize their product and customer knowledge, along with their financial strength and appetite for risk, in order to differentiate. There are untold secrets deep within the corporate data repositories that need to be unlocked, normalized and mined for opportunities. Business intelligence is a giant and sticky ball of twine which needs to be untangled. This is where software development and IT work together to deliver exceptional value.

The truth of the matter is that software development is moving faster than ever, and businesses that don’t take hold of their application portfolios are doomed to repeat the missteps of the past. Does anyone remember the protocol wars (IPX, IP, SNA), Y2K, or the myriad of worms, viruses and malware that have infected versions of Windows for years? How much of an administrator’s time is wasted waiting for a reply on a bug from a large enterprise software provider?

If we look at modern software practices in Open Source, we find a scary process by which thousands of individuals contribute towards building something that couldn’t be sustained even by the largest software development houses like IBM and Microsoft alone. Code enhancements, features and regression testing are all done by a community of individuals (some sponsored, some not, some anonymous) who make a worthwhile effort to build sustainability into a very dynamic system.

In fact, the Linux 2.6 kernel changes so often that there is an ever-evolving process to test new ways of optimizing, tuning and delivering code. Functional weaknesses in the process are flushed out quickly by the community and fixed on the fly (a sort of weakly bonded neural network). This is no typical software development project; with millions of lines of code and counting, the Linux kernel is an unbelievably effective software development project. See here:

“With the 2.6.x series, the Linux kernel has moved to a relatively strict, time-based release model. At the 2005 Kernel Developer Summit in Ottawa, Canada, it was decided that kernel releases would happen every 2-3 months, with each release being a ‘major’ release in that it includes new features and internal API changes.”


Open Source gives everyone the opportunity to peek inside, assess the viability of the code on its merits (not marketecture) and decide which parts are useful for building competitive value. These code pieces are then layered together to provide a domain-specific service applicable to the business.

I just want to take a moment to reflect on a critical piece of software development.

Back in my early days working with Oracle there were no client drivers for DBMS access like there are today with ODBC, JDBC, etc. In order to execute a query against the Oracle database you had to use something called the Oracle Pro*C precompiler. What it did was take your ANSI SQL statements and turn them into a bunch of C language constructs which then had to be compiled into an executable.

Luckily those days are gone. With the adoption of VMMs, para-virtualization and robust runtimes like Java, the developer can spend more time being creative rather than doing the janitorial work of conforming to the underlying infrastructure.
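
For contrast, here is a rough sketch of how the same kind of query might be issued today from PowerShell through a generic client driver (the .NET System.Data.Odbc classes); the DSN name, credentials and table are hypothetical placeholders:

# Query a database through an ODBC client driver instead of a precompiler
# (DSN, credentials and table names below are made-up placeholders)
$conn = New-Object System.Data.Odbc.OdbcConnection("DSN=OracleDSN;Uid=scott;Pwd=tiger")
$conn.Open()

$cmd = $conn.CreateCommand()
$cmd.CommandText = "SELECT ename, sal FROM emp WHERE deptno = 10"

$reader = $cmd.ExecuteReader()
while ($reader.Read()) {
    Write-Host ("{0} earns {1}" -f $reader["ename"], $reader["sal"])
}

$reader.Close()
$conn.Close()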

More and more intelligent layers are being built into the architecture stack, providing everything from In-Memory Data Grids and Clustered File Systems to new execution patterns like Map/Reduce. In cloud taxonomy jargon this layer is called Platform as a Service. These services abstract the complex nature of resource management away from the SaaS architect, allowing them to deliver compelling value-added services.
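
As a toy illustration of the Map/Reduce idea (not tied to any particular PaaS product), here is a word-count sketch in PowerShell: the map step splits lines into individual words, and the reduce step groups identical words and counts them. In a real platform those two phases would be distributed across many nodes:

# Input data: a few lines of text
$lines = @("the quick brown fox", "the lazy dog", "the fox")

# Map phase: emit every word as an individual record
$words = $lines | ForEach-Object { $_ -split '\s+' }

# Reduce phase: group identical words and count the occurrences in each group
$words | Group-Object | Sort-Object Count -Descending |
    ForEach-Object { "{0,-6} : {1}" -f $_.Name, $_.Count }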

In summary, yes, IT needs to think about the stability and viability of the software provider, but it also needs to take responsibility for its own development destiny. We need to reward creativity and responsibility, and attract more students to computer science and programming technologies. The problems we see in software development won’t go away; in fact, things are going to get harder before they get better. So hack on... it will be a wild ride…

Next we will discuss these layers in more depth, harnessing the taxonomy of cloud to describe Platform as a Service.

-g