
I've read a text file from a database into Excel, in such a way that unnecessary columns are filtered out. My approach to filtering rows was to use two subroutines and call the second from within the first. It takes ~8 seconds for the sheet to be filtered, and there are only 400 or so rows. The fact that it takes that long (even though it works) suggests my code is inefficient. If anyone has a better method I would greatly appreciate the knowledge! To delete rows I've used the following VBA:

Sub FilterAndDelete()
    Dim LR As Long, i As Long
    LR = Range("A" & Rows.Count).End(xlUp).Row
    For i = LR To 1 Step -1
        Select Case Left(Range("A" & i).Value, 3)
            Case "CHA", "HAM", "BKN"
                Call FilterAndDeleteB
            Case Else
                Rows(i).Delete
                Call FilterAndDeleteB
        End Select
    Next i
End Sub

Sub FilterAndDeleteB()
    Dim Br As Long, i As Long
    Br = Range("B" & Rows.Count).End(xlUp).Row
    For i = Br To 1 Step -1
        Select Case Left(Range("B" & i).Value, 1)
            Case "-"
                Rows(i).Delete
            Case Else
                'do nothing
        End Select
    Next i
End Sub
  • Are you missing an End Sub or more at the end? Commented Jan 25, 2017 at 16:58
  • Yes. An oversight from copy/pasta. Commented Jan 25, 2017 at 20:16

2 Answers


I'm confused here: you are looping through FilterAndDeleteB for every row with a value in column A? Let's look at the logic:

Count rows in column A
    From the last row of column A to the first
        If the first 3 characters of the A cell are `CHA`, `HAM`, or `BKN`
            Count rows in column B
                From the last row of column B to the first
                    If the B cell starts with `-` then delete that row
        Else
            Delete the current row
            Count rows in column B
                From the last row of column B to the first
                    If the B cell starts with `-` then delete that row
    Next row in column A

Right, so for every single row in Column A, you will loop through every single row in column B. Once you loop through column B, you should never need to loop through it again, according to the logic. Everything beginning with - will already have been deleted.

Instead, I imagine, you'd rather look for CHA, HAM, and BKN, and for every row where one of those isn't present, delete the row if there's a - in column B, right? If so, consider this (untested) code:

Option Explicit

Public Sub FilterAndDeleteRows()
    Const KEEP_STRING As String = "CHA,HAM,BKN"
    Dim lastRow As Long
    lastRow = Cells(Rows.Count, 1).End(xlUp).Row
    Dim i As Long
    Dim beginningString As String
    For i = lastRow To 1 Step -1
        beginningString = Left$(Cells(i, 1).Value, 3)
        If Not InStr(1, KEEP_STRING, beginningString) > 0 Then
            If Left$(Cells(i, 2).Value, 1) = "-" Then Rows(i).Delete
        End If
    Next i
End Sub

Now it's only one loop, and it's skipping the rows where A matches.
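Even with one loop, deleting rows one at a time forces Excel to shift cells on every Delete. A further, equally untested variant of the same idea collects the matching rows into a single Range with Application.Union and deletes them all in one operation at the end (the sub name here is my own):

```vba
Public Sub FilterAndDeleteRowsBatched()
    Const KEEP_STRING As String = "CHA,HAM,BKN"
    Dim lastRow As Long
    lastRow = Cells(Rows.Count, 1).End(xlUp).Row

    Dim toDelete As Range
    Dim i As Long
    For i = lastRow To 1 Step -1
        If Not InStr(1, KEEP_STRING, Left$(Cells(i, 1).Value, 3)) > 0 Then
            If Left$(Cells(i, 2).Value, 1) = "-" Then
                ' accumulate rows instead of deleting immediately
                If toDelete Is Nothing Then
                    Set toDelete = Rows(i)
                Else
                    Set toDelete = Union(toDelete, Rows(i))
                End If
            End If
        End If
    Next i

    ' one Delete call instead of one per row
    If Not toDelete Is Nothing Then toDelete.Delete
End Sub
```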


But the more likely place to improve the speed of your filtering is by filtering when getting data from the file. At that point, you probably have the data in an array and you probably could just move records from that array that match your criteria to a new array, which you print out at the end.
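As a rough, untested sketch of that array idea applied after the data is already on the sheet (assuming the data starts at A1, and using the same keep rule as above: keep a row if column A matches or column B does not start with -):

```vba
Public Sub FilterViaArray()
    Const KEEP_STRING As String = "CHA,HAM,BKN"

    Dim data As Variant
    data = ActiveSheet.UsedRange.Value          ' one read from the sheet

    Dim kept() As Variant
    ReDim kept(1 To UBound(data, 1), 1 To UBound(data, 2))

    Dim r As Long, c As Long, n As Long
    For r = 1 To UBound(data, 1)
        ' keep rows whose column A prefix matches, or whose column B doesn't start with "-"
        If InStr(1, KEEP_STRING, Left$(CStr(data(r, 1)), 3)) > 0 _
           Or Left$(CStr(data(r, 2)), 1) <> "-" Then
            n = n + 1
            For c = 1 To UBound(data, 2)
                kept(n, c) = data(r, c)
            Next c
        End If
    Next r

    ActiveSheet.UsedRange.ClearContents
    ' one write back to the sheet
    If n > 0 Then Range("A1").Resize(n, UBound(data, 2)).Value = kept
End Sub
```

All the sheet traffic happens in exactly two operations (one read, one write), which is usually where the big speedup over row-by-row deletion comes from.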

  • That approach is certainly more efficient, my thanks. When I have a bit of time I'll try the array method and see how much more of an improvement it is and report back :) Commented Jan 25, 2017 at 20:38
  1. Are you reading the data from the database via SQL? Why not include only the columns you want in the SQL, and add a WHERE clause to the SQL to filter the rows?

  2. You might also be able to use ADODB to connect to the text file itself, and once again issue an SQL statement that returns only the desired rows and columns. Excel has a CopyFromRecordset method to paste the results of an ADO Recordset onto an Excel worksheet.
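For what it's worth, a sketch of that ADODB route might look like the following. The folder, file name, and column names are all placeholders, and the connection string assumes the ACE text driver is available:

```vba
Public Sub QueryTextFile()
    ' Folder, file name, and column names below are placeholders - adjust to your data.
    Dim conn As Object
    Set conn = CreateObject("ADODB.Connection")
    conn.Open "Provider=Microsoft.ACE.OLEDB.12.0;" & _
              "Data Source=C:\data\;" & _
              "Extended Properties=""text;HDR=Yes;FMT=Delimited"";"

    ' Filter rows and columns in the SQL itself, before anything touches the sheet
    Dim rs As Object
    Set rs = conn.Execute( _
        "SELECT Col1, Col2 FROM [export.txt] " & _
        "WHERE LEFT(Col1, 3) IN ('CHA','HAM','BKN') OR LEFT(Col2, 1) <> '-'")

    ' CopyFromRecordset pastes all the filtered rows in one shot
    Range("A1").CopyFromRecordset rs

    rs.Close
    conn.Close
End Sub
```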
